00:00:00.001 Started by upstream project "autotest-per-patch" build number 127115
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.138 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.139 The recommended git tool is: git
00:00:00.140 using credential 00000000-0000-0000-0000-000000000002
00:00:00.142 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.169 Fetching changes from the remote Git repository
00:00:00.171 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.197 Using shallow fetch with depth 1
00:00:00.197 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.197 > git --version # timeout=10
00:00:00.225 > git --version # 'git version 2.39.2'
00:00:00.225 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.242 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.242 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.052 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.062 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.076 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD)
00:00:06.076 > git config core.sparsecheckout # timeout=10
00:00:06.089 > git read-tree -mu HEAD # timeout=10
00:00:06.105 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5
00:00:06.160 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters"
00:00:06.160 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10
00:00:06.281 [Pipeline] Start of Pipeline
00:00:06.297 [Pipeline] library
00:00:06.299 Loading library shm_lib@master
00:00:06.300 Library shm_lib@master is cached. Copying from home.
00:00:06.318 [Pipeline] node
00:00:06.339 Running on CYP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.341 [Pipeline] {
00:00:06.354 [Pipeline] catchError
00:00:06.355 [Pipeline] {
00:00:06.369 [Pipeline] wrap
00:00:06.380 [Pipeline] {
00:00:06.387 [Pipeline] stage
00:00:06.388 [Pipeline] { (Prologue)
00:00:06.563 [Pipeline] sh
00:00:06.855 + logger -p user.info -t JENKINS-CI
00:00:06.873 [Pipeline] echo
00:00:06.875 Node: CYP11
00:00:06.882 [Pipeline] sh
00:00:07.188 [Pipeline] setCustomBuildProperty
00:00:07.201 [Pipeline] echo
00:00:07.202 Cleanup processes
00:00:07.207 [Pipeline] sh
00:00:07.491 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.491 491417 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.505 [Pipeline] sh
00:00:07.796 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.796 ++ grep -v 'sudo pgrep'
00:00:07.796 ++ awk '{print $1}'
00:00:07.796 + sudo kill -9
00:00:07.796 + true
00:00:07.812 [Pipeline] cleanWs
00:00:07.824 [WS-CLEANUP] Deleting project workspace...
00:00:07.824 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.831 [WS-CLEANUP] done
00:00:07.836 [Pipeline] setCustomBuildProperty
00:00:07.852 [Pipeline] sh
00:00:08.138 + sudo git config --global --replace-all safe.directory '*'
00:00:08.228 [Pipeline] httpRequest
00:00:08.251 [Pipeline] echo
00:00:08.252 Sorcerer 10.211.164.101 is alive
00:00:08.262 [Pipeline] httpRequest
00:00:08.268 HttpMethod: GET
00:00:08.269 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:08.269 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:08.272 Response Code: HTTP/1.1 200 OK
00:00:08.273 Success: Status code 200 is in the accepted range: 200,404
00:00:08.273 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:08.910 [Pipeline] sh
00:00:09.208 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:09.224 [Pipeline] httpRequest
00:00:09.243 [Pipeline] echo
00:00:09.245 Sorcerer 10.211.164.101 is alive
00:00:09.256 [Pipeline] httpRequest
00:00:09.262 HttpMethod: GET
00:00:09.262 URL: http://10.211.164.101/packages/spdk_415e0bb41315fc44ebe50dae04416ef4e2760778.tar.gz
00:00:09.263 Sending request to url: http://10.211.164.101/packages/spdk_415e0bb41315fc44ebe50dae04416ef4e2760778.tar.gz
00:00:09.266 Response Code: HTTP/1.1 200 OK
00:00:09.267 Success: Status code 200 is in the accepted range: 200,404
00:00:09.268 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_415e0bb41315fc44ebe50dae04416ef4e2760778.tar.gz
00:00:24.621 [Pipeline] sh
00:00:24.909 + tar --no-same-owner -xf spdk_415e0bb41315fc44ebe50dae04416ef4e2760778.tar.gz
00:00:27.473 [Pipeline] sh
00:00:27.769 + git -C spdk log --oneline -n5
00:00:27.769 415e0bb41 pkgdep/git: Add extra libnl-genl dev package to QAT's dependencies
00:00:27.769 8711e7e9b autotest: reduce accel tests runs with SPDK_TEST_ACCEL flag
00:00:27.769 50222f810 configure: don't exit on non Intel platforms
00:00:27.769 78cbcfdde test/scheduler: fix cpu mask for rpc governor tests
00:00:27.769 ba69d4678 event/scheduler: remove custom opts from static scheduler
00:00:27.782 [Pipeline] }
00:00:27.800 [Pipeline] // stage
00:00:27.811 [Pipeline] stage
00:00:27.813 [Pipeline] { (Prepare)
00:00:27.832 [Pipeline] writeFile
00:00:27.850 [Pipeline] sh
00:00:28.138 + logger -p user.info -t JENKINS-CI
00:00:28.152 [Pipeline] sh
00:00:28.439 + logger -p user.info -t JENKINS-CI
00:00:28.453 [Pipeline] sh
00:00:28.740 + cat autorun-spdk.conf
00:00:28.740 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:28.740 SPDK_TEST_NVMF=1
00:00:28.740 SPDK_TEST_NVME_CLI=1
00:00:28.740 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:28.740 SPDK_TEST_NVMF_NICS=e810
00:00:28.740 SPDK_TEST_VFIOUSER=1
00:00:28.740 SPDK_RUN_UBSAN=1
00:00:28.740 NET_TYPE=phy
00:00:28.748 RUN_NIGHTLY=0
00:00:28.753 [Pipeline] readFile
00:00:28.781 [Pipeline] withEnv
00:00:28.784 [Pipeline] {
00:00:28.798 [Pipeline] sh
00:00:29.086 + set -ex
00:00:29.086 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:29.086 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:29.086 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:29.086 ++ SPDK_TEST_NVMF=1
00:00:29.086 ++ SPDK_TEST_NVME_CLI=1
00:00:29.086 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:29.086 ++ SPDK_TEST_NVMF_NICS=e810
00:00:29.086 ++ SPDK_TEST_VFIOUSER=1
00:00:29.086 ++ SPDK_RUN_UBSAN=1
00:00:29.086 ++ NET_TYPE=phy
00:00:29.086 ++ RUN_NIGHTLY=0
00:00:29.086 + case $SPDK_TEST_NVMF_NICS in
00:00:29.086 + DRIVERS=ice
00:00:29.086 + [[ tcp == \r\d\m\a ]]
00:00:29.086 + [[ -n ice ]]
00:00:29.086 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:29.086 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:29.086 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:29.086 rmmod: ERROR: Module irdma is not currently loaded
00:00:29.086 rmmod: ERROR: Module i40iw is not currently loaded
00:00:29.086 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:29.086 + true
00:00:29.086 + for D in $DRIVERS
00:00:29.086 + sudo modprobe ice
00:00:29.086 + exit 0
00:00:29.097 [Pipeline] }
00:00:29.114 [Pipeline] // withEnv
00:00:29.120 [Pipeline] }
00:00:29.137 [Pipeline] // stage
00:00:29.148 [Pipeline] catchError
00:00:29.150 [Pipeline] {
00:00:29.164 [Pipeline] timeout
00:00:29.164 Timeout set to expire in 50 min
00:00:29.166 [Pipeline] {
00:00:29.178 [Pipeline] stage
00:00:29.180 [Pipeline] { (Tests)
00:00:29.196 [Pipeline] sh
00:00:29.484 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:29.484 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:29.484 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:29.484 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:29.484 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:29.484 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:29.484 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:29.484 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:29.484 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:29.484 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:29.485 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:00:29.485 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:29.485 + source /etc/os-release
00:00:29.485 ++ NAME='Fedora Linux'
00:00:29.485 ++ VERSION='38 (Cloud Edition)'
00:00:29.485 ++ ID=fedora
00:00:29.485 ++ VERSION_ID=38
00:00:29.485 ++ VERSION_CODENAME=
00:00:29.485 ++ PLATFORM_ID=platform:f38
00:00:29.485 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:29.485 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:29.485 ++ LOGO=fedora-logo-icon
00:00:29.485 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:29.485 ++ HOME_URL=https://fedoraproject.org/
00:00:29.485 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:29.485 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:29.485 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:29.485 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:29.485 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:29.485 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:29.485 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:29.485 ++ SUPPORT_END=2024-05-14
00:00:29.485 ++ VARIANT='Cloud Edition'
00:00:29.485 ++ VARIANT_ID=cloud
00:00:29.485 + uname -a
00:00:29.485 Linux spdk-cyp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:29.485 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:32.787 Hugepages
00:00:32.787 node hugesize free / total
00:00:32.787 node0 1048576kB 0 / 0
00:00:32.787 node0 2048kB 0 / 0
00:00:32.787 node1 1048576kB 0 / 0
00:00:32.787 node1 2048kB 0 / 0
00:00:32.787
00:00:32.787 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:32.787 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:00:32.787 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:00:32.787 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:00:32.787 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:00:32.787 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:00:32.787 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:00:32.787 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:00:32.787 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:00:32.787 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:00:32.787 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:00:32.787 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:00:32.787 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:00:32.787 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:00:32.787 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:00:32.787 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:00:32.787 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:00:32.787 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:00:32.787 + rm -f /tmp/spdk-ld-path
00:00:32.787 + source autorun-spdk.conf
00:00:32.787 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:32.787 ++ SPDK_TEST_NVMF=1
00:00:32.787 ++ SPDK_TEST_NVME_CLI=1
00:00:32.787 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:32.787 ++ SPDK_TEST_NVMF_NICS=e810
00:00:32.787 ++ SPDK_TEST_VFIOUSER=1
00:00:32.787 ++ SPDK_RUN_UBSAN=1
00:00:32.787 ++ NET_TYPE=phy
00:00:32.787 ++ RUN_NIGHTLY=0
00:00:32.787 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:32.787 + [[ -n '' ]]
00:00:32.787 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:33.049 + for M in /var/spdk/build-*-manifest.txt
00:00:33.049 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:33.049 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:33.049 + for M in /var/spdk/build-*-manifest.txt
00:00:33.049 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:33.049 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:33.049 ++ uname
00:00:33.049 + [[ Linux == \L\i\n\u\x ]]
00:00:33.049 + sudo dmesg -T
00:00:33.049 + sudo dmesg --clear
00:00:33.049 + dmesg_pid=492513
00:00:33.049 + [[ Fedora Linux == FreeBSD ]]
00:00:33.049 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:33.049 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:33.049 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:33.049 + [[ -x /usr/src/fio-static/fio ]]
00:00:33.049 + export FIO_BIN=/usr/src/fio-static/fio
00:00:33.049 + FIO_BIN=/usr/src/fio-static/fio
00:00:33.049 + sudo dmesg -Tw
00:00:33.049 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:33.049 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:33.049 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:33.049 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:33.049 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:33.049 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:33.049 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:33.049 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:33.049 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:33.049 Test configuration:
00:00:33.049 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:33.049 SPDK_TEST_NVMF=1
00:00:33.049 SPDK_TEST_NVME_CLI=1
00:00:33.049 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:33.049 SPDK_TEST_NVMF_NICS=e810
00:00:33.049 SPDK_TEST_VFIOUSER=1
00:00:33.049 SPDK_RUN_UBSAN=1
00:00:33.049 NET_TYPE=phy
00:00:33.049 RUN_NIGHTLY=0
22:48:50 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:00:33.049 22:48:50 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:33.049 22:48:50 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:33.049 22:48:50 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:33.049 22:48:50 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:33.049 22:48:50 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:33.049 22:48:50 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:33.049 22:48:50 -- paths/export.sh@5 -- $ export PATH
00:00:33.049 22:48:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:33.049 22:48:50 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:00:33.049 22:48:50 -- common/autobuild_common.sh@447 -- $ date +%s
00:00:33.049 22:48:50 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721854130.XXXXXX
00:00:33.049 22:48:50 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721854130.Tl6nuS
00:00:33.049 22:48:50 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:00:33.049 22:48:50 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:00:33.049 22:48:50 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:00:33.049 22:48:50 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:33.049 22:48:50 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:33.049 22:48:50 -- common/autobuild_common.sh@463 -- $ get_config_params
00:00:33.049 22:48:50 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:00:33.049 22:48:50 -- common/autotest_common.sh@10 -- $ set +x
00:00:33.049 22:48:50 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:00:33.049 22:48:50 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:00:33.049 22:48:50 -- pm/common@17 -- $ local monitor
00:00:33.049 22:48:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:33.049 22:48:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:33.049 22:48:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:33.049 22:48:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:33.049 22:48:50 -- pm/common@21 -- $ date +%s
00:00:33.049 22:48:50 -- pm/common@25 -- $ sleep 1
00:00:33.049 22:48:50 -- pm/common@21 -- $ date +%s
00:00:33.049 22:48:50 -- pm/common@21 -- $ date +%s
00:00:33.049 22:48:50 -- pm/common@21 -- $ date +%s
00:00:33.049 22:48:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721854130
00:00:33.049 22:48:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721854130
00:00:33.049 22:48:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721854130
00:00:33.049 22:48:50 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721854130
00:00:33.310 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721854130_collect-vmstat.pm.log
00:00:33.311 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721854130_collect-cpu-load.pm.log
00:00:33.311 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721854130_collect-cpu-temp.pm.log
00:00:33.311 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721854130_collect-bmc-pm.bmc.pm.log
00:00:34.253 22:48:51 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:00:34.253 22:48:51 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:34.253 22:48:51 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:34.253 22:48:51 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:34.253 22:48:51 -- spdk/autobuild.sh@16 -- $ date -u
00:00:34.253 Wed Jul 24 08:48:51 PM UTC 2024
00:00:34.253 22:48:51 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:34.253 v24.09-pre-312-g415e0bb41
00:00:34.253 22:48:51 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:34.253 22:48:51 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:34.253 22:48:51 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:34.253 22:48:51 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:00:34.253 22:48:51 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:00:34.253 22:48:51 -- common/autotest_common.sh@10 -- $ set +x
00:00:34.253 ************************************
00:00:34.253 START TEST ubsan
00:00:34.253 ************************************
00:00:34.253 22:48:51 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:00:34.253 using ubsan
00:00:34.253
00:00:34.253 real 0m0.001s
00:00:34.253 user 0m0.000s
00:00:34.253 sys 0m0.000s
00:00:34.253 22:48:51 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:00:34.253 22:48:51 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:34.253 ************************************
00:00:34.253 END TEST ubsan
00:00:34.253 ************************************
00:00:34.253 22:48:51 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:34.253 22:48:51 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:34.253 22:48:51 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:34.253 22:48:51 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:34.253 22:48:51 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:34.253 22:48:51 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:34.253 22:48:51 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:34.253 22:48:51 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:34.253 22:48:51 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:00:34.559 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:00:34.559 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:00:34.821 Using 'verbs' RDMA provider
00:00:50.714 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:02.949 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:02.950 Creating mk/config.mk...done.
00:01:02.950 Creating mk/cc.flags.mk...done.
00:01:02.950 Type 'make' to build.
00:01:02.950 22:49:19 -- spdk/autobuild.sh@69 -- $ run_test make make -j144
00:01:02.950 22:49:19 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:02.950 22:49:19 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:02.950 22:49:19 -- common/autotest_common.sh@10 -- $ set +x
00:01:02.950 ************************************
00:01:02.950 START TEST make
00:01:02.950 ************************************
00:01:02.950 22:49:19 make -- common/autotest_common.sh@1125 -- $ make -j144
00:01:02.950 make[1]: Nothing to be done for 'all'.
00:01:03.892 The Meson build system
00:01:03.892 Version: 1.3.1
00:01:03.892 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:03.892 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:03.892 Build type: native build
00:01:03.892 Project name: libvfio-user
00:01:03.892 Project version: 0.0.1
00:01:03.892 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:03.892 C linker for the host machine: cc ld.bfd 2.39-16
00:01:03.892 Host machine cpu family: x86_64
00:01:03.892 Host machine cpu: x86_64
00:01:03.892 Run-time dependency threads found: YES
00:01:03.892 Library dl found: YES
00:01:03.892 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:03.892 Run-time dependency json-c found: YES 0.17
00:01:03.892 Run-time dependency cmocka found: YES 1.1.7
00:01:03.892 Program pytest-3 found: NO
00:01:03.892 Program flake8 found: NO
00:01:03.892 Program misspell-fixer found: NO
00:01:03.892 Program restructuredtext-lint found: NO
00:01:03.892 Program valgrind found: YES (/usr/bin/valgrind)
00:01:03.892 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:03.892 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:03.892 Compiler for C supports arguments -Wwrite-strings: YES
00:01:03.893 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:03.893 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:03.893 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:03.893 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:03.893 Build targets in project: 8
00:01:03.893 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:03.893 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:03.893
00:01:03.893 libvfio-user 0.0.1
00:01:03.893
00:01:03.893 User defined options
00:01:03.893 buildtype : debug
00:01:03.893 default_library: shared
00:01:03.893 libdir : /usr/local/lib
00:01:03.893
00:01:03.893 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:04.151 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:04.410 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:04.410 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:04.410 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:04.410 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:04.411 [5/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:04.411 [6/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:04.411 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:04.411 [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:04.411 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:04.411 [10/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:04.411 [11/37] Compiling C object samples/null.p/null.c.o
00:01:04.411 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:04.411 [13/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:04.411 [14/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:04.411 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:04.411 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:04.411 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:04.411 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:04.411 [19/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:04.411 [20/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:04.411 [21/37] Compiling C object samples/server.p/server.c.o
00:01:04.411 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:04.411 [23/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:04.411 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:04.411 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:04.411 [26/37] Compiling C object samples/client.p/client.c.o
00:01:04.411 [27/37] Linking target samples/client
00:01:04.411 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:04.411 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:04.411 [30/37] Linking target test/unit_tests
00:01:04.411 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:01:04.671 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:04.671 [33/37] Linking target samples/lspci
00:01:04.671 [34/37] Linking target samples/server
00:01:04.671 [35/37] Linking target samples/gpio-pci-idio-16
00:01:04.671 [36/37] Linking target samples/null
00:01:04.671 [37/37] Linking target samples/shadow_ioeventfd_server
00:01:04.671 INFO: autodetecting backend as ninja
00:01:04.671 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:04.671 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:04.932 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:04.932 ninja: no work to do.
00:01:11.526 The Meson build system 00:01:11.526 Version: 1.3.1 00:01:11.526 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:11.527 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:11.527 Build type: native build 00:01:11.527 Program cat found: YES (/usr/bin/cat) 00:01:11.527 Project name: DPDK 00:01:11.527 Project version: 24.03.0 00:01:11.527 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:11.527 C linker for the host machine: cc ld.bfd 2.39-16 00:01:11.527 Host machine cpu family: x86_64 00:01:11.527 Host machine cpu: x86_64 00:01:11.527 Message: ## Building in Developer Mode ## 00:01:11.527 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:11.527 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:11.527 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:11.527 Program python3 found: YES (/usr/bin/python3) 00:01:11.527 Program cat found: YES (/usr/bin/cat) 00:01:11.527 Compiler for C supports arguments -march=native: YES 00:01:11.527 Checking for size of "void *" : 8 00:01:11.527 Checking for size of "void *" : 8 (cached) 00:01:11.527 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:11.527 Library m found: YES 00:01:11.527 Library numa found: YES 00:01:11.527 Has header "numaif.h" : YES 00:01:11.527 Library fdt found: NO 00:01:11.527 Library execinfo found: NO 00:01:11.527 Has header "execinfo.h" : YES 00:01:11.527 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:11.527 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:11.527 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:11.527 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:11.527 Run-time dependency openssl found: YES 3.0.9 00:01:11.527 Run-time 
dependency libpcap found: YES 1.10.4 00:01:11.527 Has header "pcap.h" with dependency libpcap: YES 00:01:11.527 Compiler for C supports arguments -Wcast-qual: YES 00:01:11.527 Compiler for C supports arguments -Wdeprecated: YES 00:01:11.527 Compiler for C supports arguments -Wformat: YES 00:01:11.527 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:11.527 Compiler for C supports arguments -Wformat-security: NO 00:01:11.527 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:11.527 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:11.527 Compiler for C supports arguments -Wnested-externs: YES 00:01:11.527 Compiler for C supports arguments -Wold-style-definition: YES 00:01:11.527 Compiler for C supports arguments -Wpointer-arith: YES 00:01:11.527 Compiler for C supports arguments -Wsign-compare: YES 00:01:11.527 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:11.527 Compiler for C supports arguments -Wundef: YES 00:01:11.527 Compiler for C supports arguments -Wwrite-strings: YES 00:01:11.527 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:11.527 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:11.527 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:11.527 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:11.527 Program objdump found: YES (/usr/bin/objdump) 00:01:11.527 Compiler for C supports arguments -mavx512f: YES 00:01:11.527 Checking if "AVX512 checking" compiles: YES 00:01:11.527 Fetching value of define "__SSE4_2__" : 1 00:01:11.527 Fetching value of define "__AES__" : 1 00:01:11.527 Fetching value of define "__AVX__" : 1 00:01:11.527 Fetching value of define "__AVX2__" : 1 00:01:11.527 Fetching value of define "__AVX512BW__" : 1 00:01:11.527 Fetching value of define "__AVX512CD__" : 1 00:01:11.527 Fetching value of define "__AVX512DQ__" : 1 00:01:11.527 Fetching value of define "__AVX512F__" : 1 
00:01:11.527 Fetching value of define "__AVX512VL__" : 1
00:01:11.527 Fetching value of define "__PCLMUL__" : 1
00:01:11.527 Fetching value of define "__RDRND__" : 1
00:01:11.527 Fetching value of define "__RDSEED__" : 1
00:01:11.527 Fetching value of define "__VPCLMULQDQ__" : 1
00:01:11.527 Fetching value of define "__znver1__" : (undefined)
00:01:11.527 Fetching value of define "__znver2__" : (undefined)
00:01:11.527 Fetching value of define "__znver3__" : (undefined)
00:01:11.527 Fetching value of define "__znver4__" : (undefined)
00:01:11.527 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:11.527 Message: lib/log: Defining dependency "log"
00:01:11.527 Message: lib/kvargs: Defining dependency "kvargs"
00:01:11.527 Message: lib/telemetry: Defining dependency "telemetry"
00:01:11.527 Checking for function "getentropy" : NO
00:01:11.527 Message: lib/eal: Defining dependency "eal"
00:01:11.527 Message: lib/ring: Defining dependency "ring"
00:01:11.527 Message: lib/rcu: Defining dependency "rcu"
00:01:11.527 Message: lib/mempool: Defining dependency "mempool"
00:01:11.527 Message: lib/mbuf: Defining dependency "mbuf"
00:01:11.527 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:11.527 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:11.527 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:11.527 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:11.527 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:11.527 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:01:11.527 Compiler for C supports arguments -mpclmul: YES
00:01:11.527 Compiler for C supports arguments -maes: YES
00:01:11.527 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:11.527 Compiler for C supports arguments -mavx512bw: YES
00:01:11.527 Compiler for C supports arguments -mavx512dq: YES
00:01:11.527 Compiler for C supports arguments -mavx512vl: YES
00:01:11.527 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:11.527 Compiler for C supports arguments -mavx2: YES
00:01:11.527 Compiler for C supports arguments -mavx: YES
00:01:11.527 Message: lib/net: Defining dependency "net"
00:01:11.527 Message: lib/meter: Defining dependency "meter"
00:01:11.527 Message: lib/ethdev: Defining dependency "ethdev"
00:01:11.527 Message: lib/pci: Defining dependency "pci"
00:01:11.527 Message: lib/cmdline: Defining dependency "cmdline"
00:01:11.527 Message: lib/hash: Defining dependency "hash"
00:01:11.527 Message: lib/timer: Defining dependency "timer"
00:01:11.527 Message: lib/compressdev: Defining dependency "compressdev"
00:01:11.527 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:11.527 Message: lib/dmadev: Defining dependency "dmadev"
00:01:11.527 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:11.527 Message: lib/power: Defining dependency "power"
00:01:11.527 Message: lib/reorder: Defining dependency "reorder"
00:01:11.527 Message: lib/security: Defining dependency "security"
00:01:11.527 Has header "linux/userfaultfd.h" : YES
00:01:11.527 Has header "linux/vduse.h" : YES
00:01:11.527 Message: lib/vhost: Defining dependency "vhost"
00:01:11.527 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:11.527 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:11.527 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:11.527 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:11.527 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:11.527 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:11.527 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:11.527 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:11.527 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:11.527 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:11.527 Program doxygen found: YES (/usr/bin/doxygen)
00:01:11.527 Configuring doxy-api-html.conf using configuration
00:01:11.527 Configuring doxy-api-man.conf using configuration
00:01:11.527 Program mandb found: YES (/usr/bin/mandb)
00:01:11.527 Program sphinx-build found: NO
00:01:11.527 Configuring rte_build_config.h using configuration
00:01:11.527 Message:
00:01:11.527 =================
00:01:11.527 Applications Enabled
00:01:11.527 =================
00:01:11.527
00:01:11.527 apps:
00:01:11.527
00:01:11.527
00:01:11.527 Message:
00:01:11.527 =================
00:01:11.527 Libraries Enabled
00:01:11.527 =================
00:01:11.527
00:01:11.527 libs:
00:01:11.527 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:11.527 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:11.527 cryptodev, dmadev, power, reorder, security, vhost,
00:01:11.527
00:01:11.527 Message:
00:01:11.527 ===============
00:01:11.527 Drivers Enabled
00:01:11.527 ===============
00:01:11.527
00:01:11.527 common:
00:01:11.527
00:01:11.527 bus:
00:01:11.527 pci, vdev,
00:01:11.527 mempool:
00:01:11.527 ring,
00:01:11.527 dma:
00:01:11.527
00:01:11.527 net:
00:01:11.527
00:01:11.527 crypto:
00:01:11.527
00:01:11.527 compress:
00:01:11.527
00:01:11.527 vdpa:
00:01:11.527
00:01:11.527
00:01:11.527 Message:
00:01:11.528 =================
00:01:11.528 Content Skipped
00:01:11.528 =================
00:01:11.528
00:01:11.528 apps:
00:01:11.528 dumpcap: explicitly disabled via build config
00:01:11.528 graph: explicitly disabled via build config
00:01:11.528 pdump: explicitly disabled via build config
00:01:11.528 proc-info: explicitly disabled via build config
00:01:11.528 test-acl: explicitly disabled via build config
00:01:11.528 test-bbdev: explicitly disabled via build config
00:01:11.528 test-cmdline: explicitly disabled via build config
00:01:11.528 test-compress-perf: explicitly disabled via build config
00:01:11.528 test-crypto-perf: explicitly disabled via build config
00:01:11.528 test-dma-perf: explicitly disabled via build config
00:01:11.528 test-eventdev: explicitly disabled via build config
00:01:11.528 test-fib: explicitly disabled via build config
00:01:11.528 test-flow-perf: explicitly disabled via build config
00:01:11.528 test-gpudev: explicitly disabled via build config
00:01:11.528 test-mldev: explicitly disabled via build config
00:01:11.528 test-pipeline: explicitly disabled via build config
00:01:11.528 test-pmd: explicitly disabled via build config
00:01:11.528 test-regex: explicitly disabled via build config
00:01:11.528 test-sad: explicitly disabled via build config
00:01:11.528 test-security-perf: explicitly disabled via build config
00:01:11.528
00:01:11.528 libs:
00:01:11.528 argparse: explicitly disabled via build config
00:01:11.528 metrics: explicitly disabled via build config
00:01:11.528 acl: explicitly disabled via build config
00:01:11.528 bbdev: explicitly disabled via build config
00:01:11.528 bitratestats: explicitly disabled via build config
00:01:11.528 bpf: explicitly disabled via build config
00:01:11.528 cfgfile: explicitly disabled via build config
00:01:11.528 distributor: explicitly disabled via build config
00:01:11.528 efd: explicitly disabled via build config
00:01:11.528 eventdev: explicitly disabled via build config
00:01:11.528 dispatcher: explicitly disabled via build config
00:01:11.528 gpudev: explicitly disabled via build config
00:01:11.528 gro: explicitly disabled via build config
00:01:11.528 gso: explicitly disabled via build config
00:01:11.528 ip_frag: explicitly disabled via build config
00:01:11.528 jobstats: explicitly disabled via build config
00:01:11.528 latencystats: explicitly disabled via build config
00:01:11.528 lpm: explicitly disabled via build config
00:01:11.528 member: explicitly disabled via build config
00:01:11.528 pcapng: explicitly disabled via build config
00:01:11.528 rawdev: explicitly disabled via build config
00:01:11.528 regexdev: explicitly disabled via build config
00:01:11.528 mldev: explicitly disabled via build config
00:01:11.528 rib: explicitly disabled via build config
00:01:11.528 sched: explicitly disabled via build config
00:01:11.528 stack: explicitly disabled via build config
00:01:11.528 ipsec: explicitly disabled via build config
00:01:11.528 pdcp: explicitly disabled via build config
00:01:11.528 fib: explicitly disabled via build config
00:01:11.528 port: explicitly disabled via build config
00:01:11.528 pdump: explicitly disabled via build config
00:01:11.528 table: explicitly disabled via build config
00:01:11.528 pipeline: explicitly disabled via build config
00:01:11.528 graph: explicitly disabled via build config
00:01:11.528 node: explicitly disabled via build config
00:01:11.528
00:01:11.528 drivers:
00:01:11.528 common/cpt: not in enabled drivers build config
00:01:11.528 common/dpaax: not in enabled drivers build config
00:01:11.528 common/iavf: not in enabled drivers build config
00:01:11.528 common/idpf: not in enabled drivers build config
00:01:11.528 common/ionic: not in enabled drivers build config
00:01:11.528 common/mvep: not in enabled drivers build config
00:01:11.528 common/octeontx: not in enabled drivers build config
00:01:11.528 bus/auxiliary: not in enabled drivers build config
00:01:11.528 bus/cdx: not in enabled drivers build config
00:01:11.528 bus/dpaa: not in enabled drivers build config
00:01:11.528 bus/fslmc: not in enabled drivers build config
00:01:11.528 bus/ifpga: not in enabled drivers build config
00:01:11.528 bus/platform: not in enabled drivers build config
00:01:11.528 bus/uacce: not in enabled drivers build config
00:01:11.528 bus/vmbus: not in enabled drivers build config
00:01:11.528 common/cnxk: not in enabled drivers build config
00:01:11.528 common/mlx5: not in enabled drivers build config
00:01:11.528 common/nfp: not in enabled drivers build config
00:01:11.528 common/nitrox: not in enabled drivers build config
00:01:11.528 common/qat: not in enabled drivers build config
00:01:11.528 common/sfc_efx: not in enabled drivers build config
00:01:11.528 mempool/bucket: not in enabled drivers build config
00:01:11.528 mempool/cnxk: not in enabled drivers build config
00:01:11.528 mempool/dpaa: not in enabled drivers build config
00:01:11.528 mempool/dpaa2: not in enabled drivers build config
00:01:11.528 mempool/octeontx: not in enabled drivers build config
00:01:11.528 mempool/stack: not in enabled drivers build config
00:01:11.528 dma/cnxk: not in enabled drivers build config
00:01:11.528 dma/dpaa: not in enabled drivers build config
00:01:11.528 dma/dpaa2: not in enabled drivers build config
00:01:11.528 dma/hisilicon: not in enabled drivers build config
00:01:11.528 dma/idxd: not in enabled drivers build config
00:01:11.528 dma/ioat: not in enabled drivers build config
00:01:11.528 dma/skeleton: not in enabled drivers build config
00:01:11.528 net/af_packet: not in enabled drivers build config
00:01:11.528 net/af_xdp: not in enabled drivers build config
00:01:11.528 net/ark: not in enabled drivers build config
00:01:11.528 net/atlantic: not in enabled drivers build config
00:01:11.528 net/avp: not in enabled drivers build config
00:01:11.528 net/axgbe: not in enabled drivers build config
00:01:11.528 net/bnx2x: not in enabled drivers build config
00:01:11.528 net/bnxt: not in enabled drivers build config
00:01:11.528 net/bonding: not in enabled drivers build config
00:01:11.528 net/cnxk: not in enabled drivers build config
00:01:11.528 net/cpfl: not in enabled drivers build config
00:01:11.528 net/cxgbe: not in enabled drivers build config
00:01:11.528 net/dpaa: not in enabled drivers build config
00:01:11.528 net/dpaa2: not in enabled drivers build config
00:01:11.528 net/e1000: not in enabled drivers build config
00:01:11.528 net/ena: not in enabled drivers build config
00:01:11.528 net/enetc: not in enabled drivers build config
00:01:11.528 net/enetfec: not in enabled drivers build config
00:01:11.528 net/enic: not in enabled drivers build config
00:01:11.528 net/failsafe: not in enabled drivers build config
00:01:11.528 net/fm10k: not in enabled drivers build config
00:01:11.528 net/gve: not in enabled drivers build config
00:01:11.528 net/hinic: not in enabled drivers build config
00:01:11.528 net/hns3: not in enabled drivers build config
00:01:11.528 net/i40e: not in enabled drivers build config
00:01:11.528 net/iavf: not in enabled drivers build config
00:01:11.528 net/ice: not in enabled drivers build config
00:01:11.528 net/idpf: not in enabled drivers build config
00:01:11.528 net/igc: not in enabled drivers build config
00:01:11.528 net/ionic: not in enabled drivers build config
00:01:11.528 net/ipn3ke: not in enabled drivers build config
00:01:11.528 net/ixgbe: not in enabled drivers build config
00:01:11.528 net/mana: not in enabled drivers build config
00:01:11.528 net/memif: not in enabled drivers build config
00:01:11.528 net/mlx4: not in enabled drivers build config
00:01:11.528 net/mlx5: not in enabled drivers build config
00:01:11.528 net/mvneta: not in enabled drivers build config
00:01:11.528 net/mvpp2: not in enabled drivers build config
00:01:11.528 net/netvsc: not in enabled drivers build config
00:01:11.528 net/nfb: not in enabled drivers build config
00:01:11.528 net/nfp: not in enabled drivers build config
00:01:11.528 net/ngbe: not in enabled drivers build config
00:01:11.528 net/null: not in enabled drivers build config
00:01:11.528 net/octeontx: not in enabled drivers build config
00:01:11.528 net/octeon_ep: not in enabled drivers build config
00:01:11.528 net/pcap: not in enabled drivers build config
00:01:11.528 net/pfe: not in enabled drivers build config
00:01:11.528 net/qede: not in enabled drivers build config
00:01:11.528 net/ring: not in enabled drivers build config
00:01:11.528 net/sfc: not in enabled drivers build config
00:01:11.528 net/softnic: not in enabled drivers build config
00:01:11.528 net/tap: not in enabled drivers build config
00:01:11.528 net/thunderx: not in enabled drivers build config
00:01:11.528 net/txgbe: not in enabled drivers build config
00:01:11.528 net/vdev_netvsc: not in enabled drivers build config
00:01:11.528 net/vhost: not in enabled drivers build config
00:01:11.528 net/virtio: not in enabled drivers build config
00:01:11.528 net/vmxnet3: not in enabled drivers build config
00:01:11.528 raw/*: missing internal dependency, "rawdev"
00:01:11.528 crypto/armv8: not in enabled drivers build config
00:01:11.528 crypto/bcmfs: not in enabled drivers build config
00:01:11.528 crypto/caam_jr: not in enabled drivers build config
00:01:11.528 crypto/ccp: not in enabled drivers build config
00:01:11.528 crypto/cnxk: not in enabled drivers build config
00:01:11.528 crypto/dpaa_sec: not in enabled drivers build config
00:01:11.528 crypto/dpaa2_sec: not in enabled drivers build config
00:01:11.528 crypto/ipsec_mb: not in enabled drivers build config
00:01:11.528 crypto/mlx5: not in enabled drivers build config
00:01:11.528 crypto/mvsam: not in enabled drivers build config
00:01:11.528 crypto/nitrox: not in enabled drivers build config
00:01:11.528 crypto/null: not in enabled drivers build config
00:01:11.528 crypto/octeontx: not in enabled drivers build config
00:01:11.528 crypto/openssl: not in enabled drivers build config
00:01:11.528 crypto/scheduler: not in enabled drivers build config
00:01:11.528 crypto/uadk: not in enabled drivers build config
00:01:11.528 crypto/virtio: not in enabled drivers build config
00:01:11.528 compress/isal: not in enabled drivers build config
00:01:11.528 compress/mlx5: not in enabled drivers build config
00:01:11.528 compress/nitrox: not in enabled drivers build config
00:01:11.528 compress/octeontx: not in enabled drivers build config
00:01:11.528 compress/zlib: not in enabled drivers build config
00:01:11.528 regex/*: missing internal dependency, "regexdev"
00:01:11.528 ml/*: missing internal dependency, "mldev"
00:01:11.528 vdpa/ifc: not in enabled drivers build config
00:01:11.528 vdpa/mlx5: not in enabled drivers build config
00:01:11.528 vdpa/nfp: not in enabled drivers build config
00:01:11.528 vdpa/sfc: not in enabled drivers build config
00:01:11.529 event/*: missing internal dependency, "eventdev"
00:01:11.529 baseband/*: missing internal dependency, "bbdev"
00:01:11.529 gpu/*: missing internal dependency, "gpudev"
00:01:11.529
00:01:11.529
00:01:11.529 Build targets in project: 84
00:01:11.529
00:01:11.529 DPDK 24.03.0
00:01:11.529
00:01:11.529 User defined options
00:01:11.529 buildtype : debug
00:01:11.529 default_library : shared
00:01:11.529 libdir : lib
00:01:11.529 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:11.529 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:11.529 c_link_args :
00:01:11.529 cpu_instruction_set: native
00:01:11.529 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev
00:01:11.529 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev
00:01:11.529 enable_docs : false
00:01:11.529 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:11.529 enable_kmods : false
00:01:11.529 max_lcores : 128
00:01:11.529 tests : false
00:01:11.529
00:01:11.529 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:11.529 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:11.529 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:11.529 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:11.529 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:11.529 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:11.529 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:11.529 [6/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:11.529 [7/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:11.529 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:11.529 [9/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:11.529 [10/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:11.529 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:11.529 [12/267] Linking static target lib/librte_log.a
00:01:11.529 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:11.529 [14/267] Linking static target lib/librte_kvargs.a
00:01:11.529 [15/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:11.529 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:11.529 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:11.529 [18/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:11.529 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:11.529 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:11.529 [21/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:11.529 [22/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:11.529 [23/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:11.529 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:11.529 [25/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:11.529 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:11.529 [27/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:11.529 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:11.529 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:11.529 [30/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:11.529 [31/267] Linking static target lib/librte_pci.a
00:01:11.790 [32/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:11.790 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:11.790 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:11.790 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:11.790 [36/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:11.790 [37/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:11.790 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:11.790 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:11.790 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:11.790 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:11.790 [42/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:11.790 [43/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:11.790 [44/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:11.790 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:11.790 [46/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:11.790 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:11.790 [48/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:01:11.790 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:11.790 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:11.790 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:12.049 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:12.049 [53/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:12.049 [54/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:12.049 [55/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:12.049 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:12.049 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:12.049 [58/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:12.049 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:12.049 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:12.049 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:12.049 [62/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:12.049 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:12.049 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:12.049 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:12.049 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:12.049 [67/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:12.049 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:12.049 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:12.049 [70/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:12.049 [71/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:12.049 [72/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:12.050 [73/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:01:12.050 [74/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:12.050 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:12.050 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:12.050 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:12.050 [78/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:12.050 [79/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:12.050 [80/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:12.050 [81/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:12.050 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:12.050 [83/267] Linking static target lib/librte_ring.a
00:01:12.050 [84/267] Linking static target lib/librte_meter.a
00:01:12.050 [85/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:12.050 [86/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:12.050 [87/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:12.050 [88/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:12.050 [89/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:12.050 [90/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:12.050 [91/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:12.050 [92/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:12.050 [93/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:12.050 [94/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:12.050 [95/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:12.050 [96/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:12.050 [97/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:12.050 [98/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:12.050 [99/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:12.050 [100/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:12.050 [101/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:12.050 [102/267] Linking static target lib/librte_cmdline.a
00:01:12.050 [103/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:12.050 [104/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:12.050 [105/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:12.050 [106/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:01:12.050 [107/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:12.050 [108/267] Linking static target lib/librte_dmadev.a
00:01:12.050 [109/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:12.050 [110/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:12.050 [111/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:12.050 [112/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:12.050 [113/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:12.050 [114/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:12.050 [115/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:12.050 [116/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:12.050 [117/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:12.050 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:12.050 [119/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:12.050 [120/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:12.050 [121/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:12.050 [122/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:12.050 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:12.050 [124/267] Linking static target lib/librte_telemetry.a
00:01:12.050 [125/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:12.050 [126/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:01:12.050 [127/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:12.050 [128/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:01:12.050 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:12.050 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:12.050 [131/267] Linking static target drivers/libtmp_rte_bus_vdev.a
00:01:12.050 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:12.050 [133/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:12.050 [134/267] Linking static target lib/librte_timer.a
00:01:12.050 [135/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:12.050 [136/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:12.050 [137/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:12.050 [138/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:12.050 [139/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:12.050 [140/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:12.050 [141/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:12.050 [142/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:12.050 [143/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:12.050 [144/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:01:12.050 [145/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:01:12.050 [146/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:01:12.050 [147/267] Linking static target lib/librte_compressdev.a
00:01:12.050 [148/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:12.050 [149/267] Linking static target lib/librte_reorder.a
00:01:12.050 [150/267] Linking target lib/librte_log.so.24.1
00:01:12.050 [151/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:01:12.050 [152/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:01:12.050 [153/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:12.050 [154/267] Linking static target lib/librte_mbuf.a
00:01:12.050 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:12.050 [156/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:01:12.050 [157/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:01:12.050 [158/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:12.050 [159/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:12.050 [160/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:12.050 [161/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:12.050 [162/267] Linking static target drivers/libtmp_rte_bus_pci.a
00:01:12.050 [163/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:12.050 [164/267] Linking static target lib/librte_net.a
00:01:12.311 [165/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:01:12.311 [166/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:12.311 [167/267] Linking static target lib/librte_eal.a
00:01:12.311 [168/267] Linking static target lib/librte_mempool.a
00:01:12.311 [169/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:01:12.311 [170/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:12.311 [171/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:01:12.311 [172/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:12.311 [173/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:12.311 [174/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:01:12.311 [175/267] Linking static target lib/librte_security.a
00:01:12.311 [176/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:01:12.311 [177/267] Linking static target lib/librte_rcu.a
00:01:12.311 [178/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:12.311 [179/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:01:12.311 [180/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:12.311 [181/267] Linking static target lib/librte_power.a
00:01:12.311 [182/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:12.311 [183/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:01:12.311 [184/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:01:12.311 [185/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:12.311 [186/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:12.311 [187/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:12.311 [188/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.311 [189/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:12.311 [190/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:12.311 [191/267] Linking static target drivers/librte_bus_vdev.a 00:01:12.311 [192/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:12.311 [193/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:12.311 [194/267] Linking target lib/librte_kvargs.so.24.1 00:01:12.311 [195/267] Linking static target drivers/librte_bus_pci.a 00:01:12.311 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:12.311 [197/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:12.311 [198/267] Linking static target lib/librte_hash.a 00:01:12.572 [199/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:12.572 [200/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:12.572 [201/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.572 [202/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:12.572 [203/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:12.572 [204/267] Linking static target drivers/librte_mempool_ring.a 00:01:12.572 [205/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:12.572 [206/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.572 [207/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.572 [208/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:12.572 [209/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:12.572 [210/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.572 [211/267] Linking static target lib/librte_cryptodev.a 00:01:12.833 [212/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.833 [213/267] Linking target lib/librte_telemetry.so.24.1 00:01:12.833 [214/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.833 [215/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:12.833 [216/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.833 [217/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.833 [218/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:13.093 [219/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.093 [220/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:13.093 [221/267] Linking static target lib/librte_ethdev.a 00:01:13.093 [222/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.354 [223/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.354 [224/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.354 [225/267] Generating lib/cmdline.sym_chk with a custom command 
(wrapped by meson to capture output) 00:01:13.354 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.925 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:13.925 [228/267] Linking static target lib/librte_vhost.a 00:01:14.867 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.251 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.865 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.809 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.809 [233/267] Linking target lib/librte_eal.so.24.1 00:01:24.069 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:24.069 [235/267] Linking target lib/librte_meter.so.24.1 00:01:24.069 [236/267] Linking target lib/librte_dmadev.so.24.1 00:01:24.069 [237/267] Linking target lib/librte_ring.so.24.1 00:01:24.069 [238/267] Linking target lib/librte_pci.so.24.1 00:01:24.069 [239/267] Linking target lib/librte_timer.so.24.1 00:01:24.069 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:01:24.069 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:24.069 [242/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:24.069 [243/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:24.069 [244/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:24.069 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:24.329 [246/267] Linking target lib/librte_mempool.so.24.1 00:01:24.329 [247/267] Linking target drivers/librte_bus_pci.so.24.1 00:01:24.329 [248/267] Linking target lib/librte_rcu.so.24.1 00:01:24.329 
[249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:24.329 [250/267] Linking target drivers/librte_mempool_ring.so.24.1 00:01:24.329 [251/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:24.329 [252/267] Linking target lib/librte_mbuf.so.24.1 00:01:24.589 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:24.589 [254/267] Linking target lib/librte_net.so.24.1 00:01:24.589 [255/267] Linking target lib/librte_compressdev.so.24.1 00:01:24.589 [256/267] Linking target lib/librte_reorder.so.24.1 00:01:24.589 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:01:24.589 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:24.589 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:24.849 [260/267] Linking target lib/librte_security.so.24.1 00:01:24.849 [261/267] Linking target lib/librte_hash.so.24.1 00:01:24.849 [262/267] Linking target lib/librte_cmdline.so.24.1 00:01:24.849 [263/267] Linking target lib/librte_ethdev.so.24.1 00:01:24.849 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:24.849 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:25.110 [266/267] Linking target lib/librte_power.so.24.1 00:01:25.110 [267/267] Linking target lib/librte_vhost.so.24.1 00:01:25.110 INFO: autodetecting backend as ninja 00:01:25.110 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:01:26.054 CC lib/ut_mock/mock.o 00:01:26.054 CC lib/log/log.o 00:01:26.054 CC lib/ut/ut.o 00:01:26.054 CC lib/log/log_flags.o 00:01:26.054 CC lib/log/log_deprecated.o 00:01:26.315 LIB libspdk_ut.a 00:01:26.315 LIB libspdk_ut_mock.a 00:01:26.315 LIB libspdk_log.a 00:01:26.315 SO libspdk_ut.so.2.0 00:01:26.315 
SO libspdk_ut_mock.so.6.0 00:01:26.315 SO libspdk_log.so.7.0 00:01:26.315 SYMLINK libspdk_ut.so 00:01:26.315 SYMLINK libspdk_ut_mock.so 00:01:26.315 SYMLINK libspdk_log.so 00:01:26.888 CC lib/util/base64.o 00:01:26.888 CC lib/util/bit_array.o 00:01:26.888 CC lib/util/cpuset.o 00:01:26.888 CC lib/util/crc16.o 00:01:26.888 CC lib/util/crc32.o 00:01:26.888 CC lib/util/crc32c.o 00:01:26.888 CC lib/util/crc32_ieee.o 00:01:26.888 CC lib/util/crc64.o 00:01:26.888 CC lib/util/dif.o 00:01:26.888 CXX lib/trace_parser/trace.o 00:01:26.888 CC lib/dma/dma.o 00:01:26.888 CC lib/util/fd.o 00:01:26.888 CC lib/util/fd_group.o 00:01:26.888 CC lib/ioat/ioat.o 00:01:26.888 CC lib/util/file.o 00:01:26.888 CC lib/util/hexlify.o 00:01:26.888 CC lib/util/iov.o 00:01:26.888 CC lib/util/math.o 00:01:26.888 CC lib/util/net.o 00:01:26.888 CC lib/util/pipe.o 00:01:26.888 CC lib/util/strerror_tls.o 00:01:26.888 CC lib/util/string.o 00:01:26.888 CC lib/util/uuid.o 00:01:26.888 CC lib/util/xor.o 00:01:26.888 CC lib/util/zipf.o 00:01:26.888 CC lib/vfio_user/host/vfio_user_pci.o 00:01:26.888 CC lib/vfio_user/host/vfio_user.o 00:01:26.888 LIB libspdk_dma.a 00:01:27.150 SO libspdk_dma.so.4.0 00:01:27.150 LIB libspdk_ioat.a 00:01:27.150 SO libspdk_ioat.so.7.0 00:01:27.150 SYMLINK libspdk_dma.so 00:01:27.150 SYMLINK libspdk_ioat.so 00:01:27.150 LIB libspdk_vfio_user.a 00:01:27.150 SO libspdk_vfio_user.so.5.0 00:01:27.150 LIB libspdk_util.a 00:01:27.411 SYMLINK libspdk_vfio_user.so 00:01:27.411 SO libspdk_util.so.10.0 00:01:27.411 SYMLINK libspdk_util.so 00:01:27.672 LIB libspdk_trace_parser.a 00:01:27.672 SO libspdk_trace_parser.so.5.0 00:01:27.672 SYMLINK libspdk_trace_parser.so 00:01:27.932 CC lib/vmd/vmd.o 00:01:27.932 CC lib/vmd/led.o 00:01:27.932 CC lib/rdma_provider/common.o 00:01:27.932 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:27.932 CC lib/conf/conf.o 00:01:27.932 CC lib/json/json_parse.o 00:01:27.932 CC lib/env_dpdk/env.o 00:01:27.932 CC lib/json/json_util.o 00:01:27.932 CC 
lib/env_dpdk/memory.o 00:01:27.932 CC lib/rdma_utils/rdma_utils.o 00:01:27.932 CC lib/json/json_write.o 00:01:27.932 CC lib/env_dpdk/pci.o 00:01:27.932 CC lib/env_dpdk/init.o 00:01:27.932 CC lib/env_dpdk/threads.o 00:01:27.932 CC lib/idxd/idxd.o 00:01:27.932 CC lib/idxd/idxd_kernel.o 00:01:27.932 CC lib/env_dpdk/pci_ioat.o 00:01:27.932 CC lib/idxd/idxd_user.o 00:01:27.932 CC lib/env_dpdk/pci_virtio.o 00:01:27.932 CC lib/env_dpdk/pci_vmd.o 00:01:27.932 CC lib/env_dpdk/pci_idxd.o 00:01:27.932 CC lib/env_dpdk/pci_event.o 00:01:27.932 CC lib/env_dpdk/sigbus_handler.o 00:01:27.932 CC lib/env_dpdk/pci_dpdk.o 00:01:27.932 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:27.932 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:28.194 LIB libspdk_rdma_provider.a 00:01:28.194 SO libspdk_rdma_provider.so.6.0 00:01:28.194 LIB libspdk_conf.a 00:01:28.194 SO libspdk_conf.so.6.0 00:01:28.194 SYMLINK libspdk_rdma_provider.so 00:01:28.194 LIB libspdk_rdma_utils.a 00:01:28.194 LIB libspdk_json.a 00:01:28.194 SO libspdk_rdma_utils.so.1.0 00:01:28.194 SYMLINK libspdk_conf.so 00:01:28.194 SO libspdk_json.so.6.0 00:01:28.194 SYMLINK libspdk_rdma_utils.so 00:01:28.455 SYMLINK libspdk_json.so 00:01:28.455 LIB libspdk_idxd.a 00:01:28.455 SO libspdk_idxd.so.12.0 00:01:28.455 LIB libspdk_vmd.a 00:01:28.455 SO libspdk_vmd.so.6.0 00:01:28.455 SYMLINK libspdk_idxd.so 00:01:28.455 SYMLINK libspdk_vmd.so 00:01:28.716 CC lib/jsonrpc/jsonrpc_server.o 00:01:28.716 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:28.717 CC lib/jsonrpc/jsonrpc_client.o 00:01:28.717 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:28.978 LIB libspdk_jsonrpc.a 00:01:28.978 SO libspdk_jsonrpc.so.6.0 00:01:28.978 LIB libspdk_env_dpdk.a 00:01:28.978 SYMLINK libspdk_jsonrpc.so 00:01:29.239 SO libspdk_env_dpdk.so.15.0 00:01:29.239 SYMLINK libspdk_env_dpdk.so 00:01:29.500 CC lib/rpc/rpc.o 00:01:29.500 LIB libspdk_rpc.a 00:01:29.760 SO libspdk_rpc.so.6.0 00:01:29.760 SYMLINK libspdk_rpc.so 00:01:30.021 CC lib/trace/trace.o 00:01:30.021 CC lib/notify/notify.o 
00:01:30.021 CC lib/notify/notify_rpc.o 00:01:30.022 CC lib/trace/trace_flags.o 00:01:30.022 CC lib/trace/trace_rpc.o 00:01:30.022 CC lib/keyring/keyring.o 00:01:30.022 CC lib/keyring/keyring_rpc.o 00:01:30.283 LIB libspdk_notify.a 00:01:30.283 SO libspdk_notify.so.6.0 00:01:30.283 LIB libspdk_keyring.a 00:01:30.283 LIB libspdk_trace.a 00:01:30.283 SO libspdk_keyring.so.1.0 00:01:30.283 SYMLINK libspdk_notify.so 00:01:30.283 SO libspdk_trace.so.10.0 00:01:30.544 SYMLINK libspdk_keyring.so 00:01:30.544 SYMLINK libspdk_trace.so 00:01:30.805 CC lib/thread/thread.o 00:01:30.805 CC lib/thread/iobuf.o 00:01:30.805 CC lib/sock/sock.o 00:01:30.805 CC lib/sock/sock_rpc.o 00:01:31.378 LIB libspdk_sock.a 00:01:31.378 SO libspdk_sock.so.10.0 00:01:31.378 SYMLINK libspdk_sock.so 00:01:31.639 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:31.639 CC lib/nvme/nvme_ctrlr.o 00:01:31.639 CC lib/nvme/nvme_fabric.o 00:01:31.639 CC lib/nvme/nvme_ns_cmd.o 00:01:31.639 CC lib/nvme/nvme_ns.o 00:01:31.639 CC lib/nvme/nvme_pcie_common.o 00:01:31.639 CC lib/nvme/nvme_pcie.o 00:01:31.639 CC lib/nvme/nvme_qpair.o 00:01:31.639 CC lib/nvme/nvme.o 00:01:31.639 CC lib/nvme/nvme_quirks.o 00:01:31.639 CC lib/nvme/nvme_transport.o 00:01:31.639 CC lib/nvme/nvme_discovery.o 00:01:31.639 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:31.639 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:31.639 CC lib/nvme/nvme_tcp.o 00:01:31.639 CC lib/nvme/nvme_opal.o 00:01:31.639 CC lib/nvme/nvme_io_msg.o 00:01:31.639 CC lib/nvme/nvme_poll_group.o 00:01:31.639 CC lib/nvme/nvme_zns.o 00:01:31.639 CC lib/nvme/nvme_stubs.o 00:01:31.639 CC lib/nvme/nvme_auth.o 00:01:31.639 CC lib/nvme/nvme_cuse.o 00:01:31.639 CC lib/nvme/nvme_vfio_user.o 00:01:31.639 CC lib/nvme/nvme_rdma.o 00:01:32.212 LIB libspdk_thread.a 00:01:32.212 SO libspdk_thread.so.10.1 00:01:32.212 SYMLINK libspdk_thread.so 00:01:32.473 CC lib/init/json_config.o 00:01:32.473 CC lib/init/subsystem.o 00:01:32.473 CC lib/init/rpc.o 00:01:32.473 CC lib/init/subsystem_rpc.o 00:01:32.473 CC 
lib/vfu_tgt/tgt_endpoint.o 00:01:32.473 CC lib/vfu_tgt/tgt_rpc.o 00:01:32.473 CC lib/blob/blobstore.o 00:01:32.473 CC lib/virtio/virtio.o 00:01:32.473 CC lib/blob/blob_bs_dev.o 00:01:32.473 CC lib/blob/request.o 00:01:32.473 CC lib/virtio/virtio_vhost_user.o 00:01:32.473 CC lib/blob/zeroes.o 00:01:32.473 CC lib/virtio/virtio_vfio_user.o 00:01:32.473 CC lib/virtio/virtio_pci.o 00:01:32.473 CC lib/accel/accel.o 00:01:32.473 CC lib/accel/accel_rpc.o 00:01:32.473 CC lib/accel/accel_sw.o 00:01:32.734 LIB libspdk_init.a 00:01:32.734 SO libspdk_init.so.5.0 00:01:32.734 LIB libspdk_vfu_tgt.a 00:01:32.734 LIB libspdk_virtio.a 00:01:32.734 SO libspdk_vfu_tgt.so.3.0 00:01:32.734 SYMLINK libspdk_init.so 00:01:32.995 SO libspdk_virtio.so.7.0 00:01:32.995 SYMLINK libspdk_vfu_tgt.so 00:01:32.995 SYMLINK libspdk_virtio.so 00:01:32.995 LIB libspdk_nvme.a 00:01:32.995 SO libspdk_nvme.so.13.1 00:01:33.256 CC lib/event/app.o 00:01:33.256 CC lib/event/reactor.o 00:01:33.256 CC lib/event/log_rpc.o 00:01:33.256 CC lib/event/app_rpc.o 00:01:33.256 CC lib/event/scheduler_static.o 00:01:33.256 LIB libspdk_accel.a 00:01:33.516 SYMLINK libspdk_nvme.so 00:01:33.516 SO libspdk_accel.so.16.0 00:01:33.516 SYMLINK libspdk_accel.so 00:01:33.516 LIB libspdk_event.a 00:01:33.516 SO libspdk_event.so.14.0 00:01:33.777 SYMLINK libspdk_event.so 00:01:33.777 CC lib/bdev/bdev.o 00:01:33.777 CC lib/bdev/bdev_rpc.o 00:01:33.777 CC lib/bdev/bdev_zone.o 00:01:33.777 CC lib/bdev/part.o 00:01:33.777 CC lib/bdev/scsi_nvme.o 00:01:35.165 LIB libspdk_blob.a 00:01:35.165 SO libspdk_blob.so.11.0 00:01:35.165 SYMLINK libspdk_blob.so 00:01:35.425 CC lib/lvol/lvol.o 00:01:35.425 CC lib/blobfs/blobfs.o 00:01:35.425 CC lib/blobfs/tree.o 00:01:35.996 LIB libspdk_bdev.a 00:01:35.996 SO libspdk_bdev.so.16.0 00:01:36.257 SYMLINK libspdk_bdev.so 00:01:36.257 LIB libspdk_blobfs.a 00:01:36.257 SO libspdk_blobfs.so.10.0 00:01:36.257 LIB libspdk_lvol.a 00:01:36.257 SO libspdk_lvol.so.10.0 00:01:36.257 SYMLINK libspdk_blobfs.so 
00:01:36.516 SYMLINK libspdk_lvol.so 00:01:36.516 CC lib/scsi/dev.o 00:01:36.516 CC lib/scsi/lun.o 00:01:36.516 CC lib/scsi/port.o 00:01:36.516 CC lib/scsi/scsi.o 00:01:36.516 CC lib/scsi/scsi_bdev.o 00:01:36.516 CC lib/scsi/scsi_pr.o 00:01:36.516 CC lib/scsi/scsi_rpc.o 00:01:36.516 CC lib/scsi/task.o 00:01:36.516 CC lib/nvmf/ctrlr.o 00:01:36.516 CC lib/ftl/ftl_core.o 00:01:36.516 CC lib/nvmf/ctrlr_discovery.o 00:01:36.516 CC lib/nvmf/ctrlr_bdev.o 00:01:36.516 CC lib/ftl/ftl_init.o 00:01:36.516 CC lib/nvmf/subsystem.o 00:01:36.516 CC lib/ftl/ftl_layout.o 00:01:36.516 CC lib/nvmf/nvmf.o 00:01:36.516 CC lib/ftl/ftl_debug.o 00:01:36.516 CC lib/ublk/ublk_rpc.o 00:01:36.516 CC lib/nvmf/nvmf_rpc.o 00:01:36.516 CC lib/ftl/ftl_io.o 00:01:36.516 CC lib/ublk/ublk.o 00:01:36.516 CC lib/ftl/ftl_sb.o 00:01:36.516 CC lib/nbd/nbd.o 00:01:36.516 CC lib/nvmf/transport.o 00:01:36.516 CC lib/ftl/ftl_l2p.o 00:01:36.516 CC lib/nvmf/tcp.o 00:01:36.516 CC lib/nbd/nbd_rpc.o 00:01:36.516 CC lib/nvmf/stubs.o 00:01:36.516 CC lib/ftl/ftl_l2p_flat.o 00:01:36.516 CC lib/nvmf/mdns_server.o 00:01:36.516 CC lib/ftl/ftl_nv_cache.o 00:01:36.516 CC lib/nvmf/vfio_user.o 00:01:36.516 CC lib/ftl/ftl_band.o 00:01:36.516 CC lib/ftl/ftl_band_ops.o 00:01:36.516 CC lib/ftl/ftl_rq.o 00:01:36.516 CC lib/nvmf/rdma.o 00:01:36.516 CC lib/nvmf/auth.o 00:01:36.516 CC lib/ftl/ftl_writer.o 00:01:36.516 CC lib/ftl/ftl_reloc.o 00:01:36.516 CC lib/ftl/ftl_l2p_cache.o 00:01:36.516 CC lib/ftl/ftl_p2l.o 00:01:36.516 CC lib/ftl/mngt/ftl_mngt.o 00:01:36.516 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:36.516 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:36.516 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:36.516 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:36.516 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:36.516 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:36.516 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:36.516 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:36.516 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:36.516 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:36.516 CC 
lib/ftl/mngt/ftl_mngt_recovery.o 00:01:36.516 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:36.516 CC lib/ftl/utils/ftl_conf.o 00:01:36.516 CC lib/ftl/utils/ftl_md.o 00:01:36.516 CC lib/ftl/utils/ftl_bitmap.o 00:01:36.516 CC lib/ftl/utils/ftl_mempool.o 00:01:36.516 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:36.516 CC lib/ftl/utils/ftl_property.o 00:01:36.516 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:36.516 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:36.516 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:36.516 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:36.516 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:36.516 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:36.516 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:36.516 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:36.516 CC lib/ftl/base/ftl_base_dev.o 00:01:36.516 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:36.516 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:36.516 CC lib/ftl/base/ftl_base_bdev.o 00:01:36.516 CC lib/ftl/ftl_trace.o 00:01:37.086 LIB libspdk_nbd.a 00:01:37.086 SO libspdk_nbd.so.7.0 00:01:37.086 LIB libspdk_scsi.a 00:01:37.086 SO libspdk_scsi.so.9.0 00:01:37.086 SYMLINK libspdk_nbd.so 00:01:37.086 LIB libspdk_ublk.a 00:01:37.346 SYMLINK libspdk_scsi.so 00:01:37.346 SO libspdk_ublk.so.3.0 00:01:37.346 SYMLINK libspdk_ublk.so 00:01:37.606 LIB libspdk_ftl.a 00:01:37.606 CC lib/vhost/vhost.o 00:01:37.606 CC lib/vhost/vhost_scsi.o 00:01:37.606 CC lib/vhost/vhost_rpc.o 00:01:37.606 CC lib/vhost/rte_vhost_user.o 00:01:37.606 CC lib/iscsi/conn.o 00:01:37.606 CC lib/vhost/vhost_blk.o 00:01:37.606 CC lib/iscsi/init_grp.o 00:01:37.606 CC lib/iscsi/iscsi.o 00:01:37.606 CC lib/iscsi/md5.o 00:01:37.606 CC lib/iscsi/param.o 00:01:37.606 CC lib/iscsi/portal_grp.o 00:01:37.606 CC lib/iscsi/tgt_node.o 00:01:37.606 CC lib/iscsi/iscsi_subsystem.o 00:01:37.606 CC lib/iscsi/iscsi_rpc.o 00:01:37.606 CC lib/iscsi/task.o 00:01:37.606 SO libspdk_ftl.so.9.0 00:01:38.179 SYMLINK libspdk_ftl.so 00:01:38.440 LIB libspdk_nvmf.a 00:01:38.440 SO libspdk_nvmf.so.19.0 
00:01:38.440 LIB libspdk_vhost.a 00:01:38.701 SO libspdk_vhost.so.8.0 00:01:38.701 SYMLINK libspdk_nvmf.so 00:01:38.701 SYMLINK libspdk_vhost.so 00:01:38.701 LIB libspdk_iscsi.a 00:01:38.701 SO libspdk_iscsi.so.8.0 00:01:38.987 SYMLINK libspdk_iscsi.so 00:01:39.603 CC module/vfu_device/vfu_virtio.o 00:01:39.603 CC module/vfu_device/vfu_virtio_blk.o 00:01:39.603 CC module/env_dpdk/env_dpdk_rpc.o 00:01:39.603 CC module/vfu_device/vfu_virtio_scsi.o 00:01:39.603 CC module/vfu_device/vfu_virtio_rpc.o 00:01:39.603 LIB libspdk_env_dpdk_rpc.a 00:01:39.603 CC module/keyring/file/keyring.o 00:01:39.603 CC module/keyring/file/keyring_rpc.o 00:01:39.603 CC module/scheduler/gscheduler/gscheduler.o 00:01:39.603 CC module/keyring/linux/keyring.o 00:01:39.603 CC module/sock/posix/posix.o 00:01:39.603 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:39.603 CC module/keyring/linux/keyring_rpc.o 00:01:39.603 CC module/blob/bdev/blob_bdev.o 00:01:39.603 CC module/accel/ioat/accel_ioat.o 00:01:39.603 CC module/accel/ioat/accel_ioat_rpc.o 00:01:39.603 CC module/accel/dsa/accel_dsa.o 00:01:39.603 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:39.603 CC module/accel/dsa/accel_dsa_rpc.o 00:01:39.864 CC module/accel/iaa/accel_iaa.o 00:01:39.864 CC module/accel/iaa/accel_iaa_rpc.o 00:01:39.864 CC module/accel/error/accel_error.o 00:01:39.864 CC module/accel/error/accel_error_rpc.o 00:01:39.864 SO libspdk_env_dpdk_rpc.so.6.0 00:01:39.864 SYMLINK libspdk_env_dpdk_rpc.so 00:01:39.864 LIB libspdk_keyring_linux.a 00:01:39.864 LIB libspdk_scheduler_gscheduler.a 00:01:39.864 LIB libspdk_keyring_file.a 00:01:39.864 LIB libspdk_scheduler_dpdk_governor.a 00:01:39.864 SO libspdk_keyring_linux.so.1.0 00:01:39.864 LIB libspdk_accel_ioat.a 00:01:39.864 SO libspdk_keyring_file.so.1.0 00:01:39.864 SO libspdk_scheduler_gscheduler.so.4.0 00:01:39.864 LIB libspdk_scheduler_dynamic.a 00:01:39.864 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:39.864 LIB libspdk_accel_iaa.a 00:01:39.864 LIB 
libspdk_accel_error.a 00:01:39.864 SO libspdk_accel_ioat.so.6.0 00:01:39.864 SO libspdk_scheduler_dynamic.so.4.0 00:01:39.864 SYMLINK libspdk_keyring_linux.so 00:01:39.864 LIB libspdk_accel_dsa.a 00:01:39.864 LIB libspdk_blob_bdev.a 00:01:39.864 SO libspdk_accel_iaa.so.3.0 00:01:39.864 SYMLINK libspdk_keyring_file.so 00:01:39.864 SYMLINK libspdk_scheduler_gscheduler.so 00:01:39.864 SO libspdk_accel_error.so.2.0 00:01:40.125 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:40.125 SO libspdk_blob_bdev.so.11.0 00:01:40.125 SO libspdk_accel_dsa.so.5.0 00:01:40.125 SYMLINK libspdk_accel_ioat.so 00:01:40.126 SYMLINK libspdk_scheduler_dynamic.so 00:01:40.126 SYMLINK libspdk_accel_error.so 00:01:40.126 SYMLINK libspdk_accel_iaa.so 00:01:40.126 SYMLINK libspdk_blob_bdev.so 00:01:40.126 LIB libspdk_vfu_device.a 00:01:40.126 SYMLINK libspdk_accel_dsa.so 00:01:40.126 SO libspdk_vfu_device.so.3.0 00:01:40.126 SYMLINK libspdk_vfu_device.so 00:01:40.387 LIB libspdk_sock_posix.a 00:01:40.387 SO libspdk_sock_posix.so.6.0 00:01:40.648 SYMLINK libspdk_sock_posix.so 00:01:40.648 CC module/bdev/delay/vbdev_delay.o 00:01:40.648 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:40.648 CC module/bdev/error/vbdev_error.o 00:01:40.648 CC module/bdev/error/vbdev_error_rpc.o 00:01:40.648 CC module/bdev/malloc/bdev_malloc.o 00:01:40.648 CC module/blobfs/bdev/blobfs_bdev.o 00:01:40.648 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:40.648 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:40.648 CC module/bdev/gpt/gpt.o 00:01:40.648 CC module/bdev/nvme/bdev_nvme.o 00:01:40.648 CC module/bdev/iscsi/bdev_iscsi.o 00:01:40.648 CC module/bdev/gpt/vbdev_gpt.o 00:01:40.648 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:40.648 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:40.648 CC module/bdev/nvme/nvme_rpc.o 00:01:40.648 CC module/bdev/raid/bdev_raid.o 00:01:40.648 CC module/bdev/lvol/vbdev_lvol.o 00:01:40.648 CC module/bdev/nvme/bdev_mdns_client.o 00:01:40.648 CC module/bdev/raid/bdev_raid_rpc.o 00:01:40.648 CC 
module/bdev/lvol/vbdev_lvol_rpc.o 00:01:40.648 CC module/bdev/raid/bdev_raid_sb.o 00:01:40.648 CC module/bdev/nvme/vbdev_opal.o 00:01:40.648 CC module/bdev/passthru/vbdev_passthru.o 00:01:40.648 CC module/bdev/raid/raid0.o 00:01:40.648 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:40.648 CC module/bdev/raid/raid1.o 00:01:40.648 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:40.648 CC module/bdev/raid/concat.o 00:01:40.648 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:40.648 CC module/bdev/null/bdev_null_rpc.o 00:01:40.648 CC module/bdev/null/bdev_null.o 00:01:40.648 CC module/bdev/split/vbdev_split.o 00:01:40.648 CC module/bdev/split/vbdev_split_rpc.o 00:01:40.648 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:40.648 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:40.648 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:40.648 CC module/bdev/aio/bdev_aio.o 00:01:40.648 CC module/bdev/ftl/bdev_ftl.o 00:01:40.648 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:40.648 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:40.648 CC module/bdev/aio/bdev_aio_rpc.o 00:01:40.648 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:40.908 LIB libspdk_blobfs_bdev.a 00:01:40.908 SO libspdk_blobfs_bdev.so.6.0 00:01:40.908 LIB libspdk_bdev_error.a 00:01:40.908 SYMLINK libspdk_blobfs_bdev.so 00:01:40.908 LIB libspdk_bdev_split.a 00:01:40.908 LIB libspdk_bdev_gpt.a 00:01:40.908 LIB libspdk_bdev_zone_block.a 00:01:40.908 SO libspdk_bdev_error.so.6.0 00:01:40.908 LIB libspdk_bdev_null.a 00:01:40.908 SO libspdk_bdev_zone_block.so.6.0 00:01:40.908 SO libspdk_bdev_split.so.6.0 00:01:40.908 SO libspdk_bdev_gpt.so.6.0 00:01:40.908 LIB libspdk_bdev_passthru.a 00:01:40.908 LIB libspdk_bdev_delay.a 00:01:40.908 SO libspdk_bdev_null.so.6.0 00:01:40.908 LIB libspdk_bdev_malloc.a 00:01:40.908 LIB libspdk_bdev_ftl.a 00:01:40.908 SYMLINK libspdk_bdev_error.so 00:01:40.908 SO libspdk_bdev_passthru.so.6.0 00:01:40.908 SYMLINK libspdk_bdev_zone_block.so 00:01:40.908 LIB libspdk_bdev_iscsi.a 00:01:40.908 LIB 
libspdk_bdev_aio.a 00:01:40.908 SO libspdk_bdev_delay.so.6.0 00:01:40.908 SYMLINK libspdk_bdev_split.so 00:01:40.908 SO libspdk_bdev_ftl.so.6.0 00:01:41.169 SO libspdk_bdev_malloc.so.6.0 00:01:41.169 SYMLINK libspdk_bdev_gpt.so 00:01:41.169 SYMLINK libspdk_bdev_null.so 00:01:41.169 SO libspdk_bdev_iscsi.so.6.0 00:01:41.169 SO libspdk_bdev_aio.so.6.0 00:01:41.169 SYMLINK libspdk_bdev_passthru.so 00:01:41.169 SYMLINK libspdk_bdev_ftl.so 00:01:41.169 SYMLINK libspdk_bdev_delay.so 00:01:41.169 SYMLINK libspdk_bdev_malloc.so 00:01:41.169 SYMLINK libspdk_bdev_iscsi.so 00:01:41.169 SYMLINK libspdk_bdev_aio.so 00:01:41.169 LIB libspdk_bdev_lvol.a 00:01:41.169 LIB libspdk_bdev_virtio.a 00:01:41.169 SO libspdk_bdev_lvol.so.6.0 00:01:41.169 SO libspdk_bdev_virtio.so.6.0 00:01:41.169 SYMLINK libspdk_bdev_lvol.so 00:01:41.169 SYMLINK libspdk_bdev_virtio.so 00:01:41.429 LIB libspdk_bdev_raid.a 00:01:41.690 SO libspdk_bdev_raid.so.6.0 00:01:41.690 SYMLINK libspdk_bdev_raid.so 00:01:42.633 LIB libspdk_bdev_nvme.a 00:01:42.633 SO libspdk_bdev_nvme.so.7.0 00:01:42.633 SYMLINK libspdk_bdev_nvme.so 00:01:43.577 CC module/event/subsystems/vmd/vmd.o 00:01:43.577 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:43.577 CC module/event/subsystems/iobuf/iobuf.o 00:01:43.577 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:43.577 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:43.577 CC module/event/subsystems/sock/sock.o 00:01:43.577 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:43.577 CC module/event/subsystems/scheduler/scheduler.o 00:01:43.577 CC module/event/subsystems/keyring/keyring.o 00:01:43.577 LIB libspdk_event_keyring.a 00:01:43.577 LIB libspdk_event_vhost_blk.a 00:01:43.577 LIB libspdk_event_vfu_tgt.a 00:01:43.577 LIB libspdk_event_vmd.a 00:01:43.577 SO libspdk_event_keyring.so.1.0 00:01:43.577 LIB libspdk_event_iobuf.a 00:01:43.577 LIB libspdk_event_scheduler.a 00:01:43.577 LIB libspdk_event_sock.a 00:01:43.577 SO libspdk_event_vhost_blk.so.3.0 00:01:43.577 SO 
libspdk_event_vfu_tgt.so.3.0 00:01:43.577 SO libspdk_event_vmd.so.6.0 00:01:43.577 SO libspdk_event_iobuf.so.3.0 00:01:43.577 SO libspdk_event_sock.so.5.0 00:01:43.577 SO libspdk_event_scheduler.so.4.0 00:01:43.577 SYMLINK libspdk_event_keyring.so 00:01:43.577 SYMLINK libspdk_event_vhost_blk.so 00:01:43.577 SYMLINK libspdk_event_vfu_tgt.so 00:01:43.838 SYMLINK libspdk_event_vmd.so 00:01:43.838 SYMLINK libspdk_event_iobuf.so 00:01:43.838 SYMLINK libspdk_event_sock.so 00:01:43.838 SYMLINK libspdk_event_scheduler.so 00:01:44.099 CC module/event/subsystems/accel/accel.o 00:01:44.359 LIB libspdk_event_accel.a 00:01:44.359 SO libspdk_event_accel.so.6.0 00:01:44.359 SYMLINK libspdk_event_accel.so 00:01:44.620 CC module/event/subsystems/bdev/bdev.o 00:01:44.881 LIB libspdk_event_bdev.a 00:01:44.881 SO libspdk_event_bdev.so.6.0 00:01:44.881 SYMLINK libspdk_event_bdev.so 00:01:45.454 CC module/event/subsystems/scsi/scsi.o 00:01:45.454 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:45.454 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:45.454 CC module/event/subsystems/nbd/nbd.o 00:01:45.454 CC module/event/subsystems/ublk/ublk.o 00:01:45.454 LIB libspdk_event_ublk.a 00:01:45.454 LIB libspdk_event_scsi.a 00:01:45.454 LIB libspdk_event_nbd.a 00:01:45.454 SO libspdk_event_scsi.so.6.0 00:01:45.454 SO libspdk_event_ublk.so.3.0 00:01:45.454 SO libspdk_event_nbd.so.6.0 00:01:45.454 LIB libspdk_event_nvmf.a 00:01:45.454 SYMLINK libspdk_event_scsi.so 00:01:45.454 SYMLINK libspdk_event_ublk.so 00:01:45.719 SYMLINK libspdk_event_nbd.so 00:01:45.719 SO libspdk_event_nvmf.so.6.0 00:01:45.719 SYMLINK libspdk_event_nvmf.so 00:01:45.981 CC module/event/subsystems/iscsi/iscsi.o 00:01:45.981 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:45.981 LIB libspdk_event_vhost_scsi.a 00:01:45.981 LIB libspdk_event_iscsi.a 00:01:46.241 SO libspdk_event_vhost_scsi.so.3.0 00:01:46.241 SO libspdk_event_iscsi.so.6.0 00:01:46.241 SYMLINK libspdk_event_vhost_scsi.so 00:01:46.241 SYMLINK 
libspdk_event_iscsi.so 00:01:46.503 SO libspdk.so.6.0 00:01:46.503 SYMLINK libspdk.so 00:01:46.765 CC app/trace_record/trace_record.o 00:01:46.765 CXX app/trace/trace.o 00:01:46.765 CC app/spdk_nvme_identify/identify.o 00:01:46.765 CC app/spdk_top/spdk_top.o 00:01:46.765 CC app/spdk_nvme_perf/perf.o 00:01:46.765 CC app/spdk_lspci/spdk_lspci.o 00:01:46.765 CC app/spdk_nvme_discover/discovery_aer.o 00:01:46.765 TEST_HEADER include/spdk/accel_module.h 00:01:46.765 TEST_HEADER include/spdk/accel.h 00:01:46.765 CC test/rpc_client/rpc_client_test.o 00:01:46.765 TEST_HEADER include/spdk/assert.h 00:01:46.765 TEST_HEADER include/spdk/barrier.h 00:01:46.765 TEST_HEADER include/spdk/base64.h 00:01:46.765 TEST_HEADER include/spdk/bdev.h 00:01:46.765 TEST_HEADER include/spdk/bdev_module.h 00:01:46.765 TEST_HEADER include/spdk/bdev_zone.h 00:01:46.765 TEST_HEADER include/spdk/bit_array.h 00:01:46.765 TEST_HEADER include/spdk/bit_pool.h 00:01:46.765 TEST_HEADER include/spdk/blob_bdev.h 00:01:46.765 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:46.765 TEST_HEADER include/spdk/blobfs.h 00:01:46.765 TEST_HEADER include/spdk/blob.h 00:01:46.765 TEST_HEADER include/spdk/conf.h 00:01:46.765 TEST_HEADER include/spdk/config.h 00:01:46.765 TEST_HEADER include/spdk/cpuset.h 00:01:46.765 TEST_HEADER include/spdk/crc16.h 00:01:46.765 TEST_HEADER include/spdk/crc32.h 00:01:46.765 TEST_HEADER include/spdk/crc64.h 00:01:46.765 TEST_HEADER include/spdk/dif.h 00:01:46.765 CC app/iscsi_tgt/iscsi_tgt.o 00:01:46.765 TEST_HEADER include/spdk/env_dpdk.h 00:01:46.765 TEST_HEADER include/spdk/dma.h 00:01:46.765 TEST_HEADER include/spdk/endian.h 00:01:46.765 TEST_HEADER include/spdk/env.h 00:01:46.765 TEST_HEADER include/spdk/event.h 00:01:46.765 TEST_HEADER include/spdk/fd.h 00:01:46.765 CC app/spdk_dd/spdk_dd.o 00:01:46.765 TEST_HEADER include/spdk/fd_group.h 00:01:46.765 TEST_HEADER include/spdk/file.h 00:01:46.765 TEST_HEADER include/spdk/ftl.h 00:01:46.765 CC app/nvmf_tgt/nvmf_main.o 00:01:46.765 CC 
examples/interrupt_tgt/interrupt_tgt.o 00:01:46.765 TEST_HEADER include/spdk/gpt_spec.h 00:01:46.765 TEST_HEADER include/spdk/hexlify.h 00:01:46.765 TEST_HEADER include/spdk/histogram_data.h 00:01:46.765 TEST_HEADER include/spdk/idxd.h 00:01:46.765 TEST_HEADER include/spdk/init.h 00:01:46.765 TEST_HEADER include/spdk/idxd_spec.h 00:01:46.765 TEST_HEADER include/spdk/ioat.h 00:01:46.765 TEST_HEADER include/spdk/ioat_spec.h 00:01:46.765 TEST_HEADER include/spdk/iscsi_spec.h 00:01:46.765 TEST_HEADER include/spdk/jsonrpc.h 00:01:46.765 TEST_HEADER include/spdk/json.h 00:01:46.765 TEST_HEADER include/spdk/keyring_module.h 00:01:46.765 TEST_HEADER include/spdk/keyring.h 00:01:46.765 TEST_HEADER include/spdk/likely.h 00:01:46.765 TEST_HEADER include/spdk/log.h 00:01:46.765 TEST_HEADER include/spdk/lvol.h 00:01:46.765 TEST_HEADER include/spdk/memory.h 00:01:46.765 TEST_HEADER include/spdk/mmio.h 00:01:46.765 TEST_HEADER include/spdk/nbd.h 00:01:46.765 TEST_HEADER include/spdk/net.h 00:01:46.765 TEST_HEADER include/spdk/notify.h 00:01:46.765 CC app/spdk_tgt/spdk_tgt.o 00:01:46.765 TEST_HEADER include/spdk/nvme.h 00:01:46.765 TEST_HEADER include/spdk/nvme_intel.h 00:01:46.765 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:46.765 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:46.765 TEST_HEADER include/spdk/nvme_zns.h 00:01:46.765 TEST_HEADER include/spdk/nvme_spec.h 00:01:46.765 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:46.765 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:46.765 TEST_HEADER include/spdk/nvmf.h 00:01:46.765 TEST_HEADER include/spdk/nvmf_spec.h 00:01:46.765 TEST_HEADER include/spdk/nvmf_transport.h 00:01:46.765 TEST_HEADER include/spdk/opal.h 00:01:47.025 TEST_HEADER include/spdk/opal_spec.h 00:01:47.025 TEST_HEADER include/spdk/pci_ids.h 00:01:47.025 TEST_HEADER include/spdk/pipe.h 00:01:47.025 TEST_HEADER include/spdk/queue.h 00:01:47.025 TEST_HEADER include/spdk/reduce.h 00:01:47.025 TEST_HEADER include/spdk/rpc.h 00:01:47.025 TEST_HEADER 
include/spdk/scheduler.h 00:01:47.025 TEST_HEADER include/spdk/scsi.h 00:01:47.025 TEST_HEADER include/spdk/scsi_spec.h 00:01:47.025 TEST_HEADER include/spdk/sock.h 00:01:47.025 TEST_HEADER include/spdk/stdinc.h 00:01:47.025 TEST_HEADER include/spdk/string.h 00:01:47.025 TEST_HEADER include/spdk/thread.h 00:01:47.025 TEST_HEADER include/spdk/trace.h 00:01:47.025 TEST_HEADER include/spdk/trace_parser.h 00:01:47.025 TEST_HEADER include/spdk/tree.h 00:01:47.025 TEST_HEADER include/spdk/ublk.h 00:01:47.025 TEST_HEADER include/spdk/util.h 00:01:47.025 TEST_HEADER include/spdk/uuid.h 00:01:47.025 TEST_HEADER include/spdk/version.h 00:01:47.025 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:47.025 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:47.025 TEST_HEADER include/spdk/vmd.h 00:01:47.025 TEST_HEADER include/spdk/vhost.h 00:01:47.025 TEST_HEADER include/spdk/xor.h 00:01:47.025 TEST_HEADER include/spdk/zipf.h 00:01:47.025 CXX test/cpp_headers/accel.o 00:01:47.025 CXX test/cpp_headers/accel_module.o 00:01:47.025 CXX test/cpp_headers/assert.o 00:01:47.025 CXX test/cpp_headers/barrier.o 00:01:47.025 CXX test/cpp_headers/base64.o 00:01:47.025 CXX test/cpp_headers/bdev.o 00:01:47.025 CXX test/cpp_headers/bdev_module.o 00:01:47.025 CXX test/cpp_headers/bdev_zone.o 00:01:47.025 CXX test/cpp_headers/bit_array.o 00:01:47.025 CXX test/cpp_headers/bit_pool.o 00:01:47.025 CXX test/cpp_headers/blob_bdev.o 00:01:47.025 CXX test/cpp_headers/blobfs_bdev.o 00:01:47.025 CXX test/cpp_headers/blobfs.o 00:01:47.025 CXX test/cpp_headers/blob.o 00:01:47.025 CXX test/cpp_headers/cpuset.o 00:01:47.025 CXX test/cpp_headers/config.o 00:01:47.025 CXX test/cpp_headers/conf.o 00:01:47.025 CXX test/cpp_headers/crc16.o 00:01:47.025 CXX test/cpp_headers/crc32.o 00:01:47.025 CXX test/cpp_headers/crc64.o 00:01:47.025 CXX test/cpp_headers/dif.o 00:01:47.025 CXX test/cpp_headers/dma.o 00:01:47.025 CXX test/cpp_headers/env_dpdk.o 00:01:47.025 CXX test/cpp_headers/endian.o 00:01:47.025 CXX 
test/cpp_headers/fd_group.o 00:01:47.025 CXX test/cpp_headers/event.o 00:01:47.025 CXX test/cpp_headers/env.o 00:01:47.025 CXX test/cpp_headers/fd.o 00:01:47.025 CXX test/cpp_headers/file.o 00:01:47.025 CXX test/cpp_headers/gpt_spec.o 00:01:47.025 CXX test/cpp_headers/ftl.o 00:01:47.025 CXX test/cpp_headers/idxd_spec.o 00:01:47.025 CXX test/cpp_headers/hexlify.o 00:01:47.025 CXX test/cpp_headers/init.o 00:01:47.025 CXX test/cpp_headers/idxd.o 00:01:47.025 CXX test/cpp_headers/histogram_data.o 00:01:47.025 CXX test/cpp_headers/iscsi_spec.o 00:01:47.025 CXX test/cpp_headers/ioat.o 00:01:47.025 CXX test/cpp_headers/json.o 00:01:47.025 CXX test/cpp_headers/ioat_spec.o 00:01:47.025 CXX test/cpp_headers/jsonrpc.o 00:01:47.025 CXX test/cpp_headers/keyring_module.o 00:01:47.025 CXX test/cpp_headers/keyring.o 00:01:47.025 CXX test/cpp_headers/likely.o 00:01:47.025 CXX test/cpp_headers/log.o 00:01:47.025 CXX test/cpp_headers/lvol.o 00:01:47.025 CXX test/cpp_headers/memory.o 00:01:47.025 CXX test/cpp_headers/net.o 00:01:47.025 CXX test/cpp_headers/nbd.o 00:01:47.025 CXX test/cpp_headers/mmio.o 00:01:47.025 CXX test/cpp_headers/notify.o 00:01:47.025 CXX test/cpp_headers/nvme.o 00:01:47.025 CC examples/ioat/perf/perf.o 00:01:47.025 CXX test/cpp_headers/nvme_ocssd.o 00:01:47.025 LINK spdk_lspci 00:01:47.025 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:47.025 CXX test/cpp_headers/nvme_intel.o 00:01:47.025 CXX test/cpp_headers/nvme_spec.o 00:01:47.025 CXX test/cpp_headers/nvmf_cmd.o 00:01:47.025 CXX test/cpp_headers/nvme_zns.o 00:01:47.025 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:47.025 CXX test/cpp_headers/opal.o 00:01:47.025 CXX test/cpp_headers/nvmf.o 00:01:47.025 CXX test/cpp_headers/nvmf_spec.o 00:01:47.025 CXX test/cpp_headers/nvmf_transport.o 00:01:47.025 CC app/fio/nvme/fio_plugin.o 00:01:47.025 CXX test/cpp_headers/pipe.o 00:01:47.026 CXX test/cpp_headers/opal_spec.o 00:01:47.026 CXX test/cpp_headers/pci_ids.o 00:01:47.026 CXX test/cpp_headers/reduce.o 00:01:47.026 CXX 
test/cpp_headers/queue.o 00:01:47.026 CXX test/cpp_headers/rpc.o 00:01:47.026 CC test/thread/poller_perf/poller_perf.o 00:01:47.026 CXX test/cpp_headers/scheduler.o 00:01:47.026 CXX test/cpp_headers/scsi_spec.o 00:01:47.026 CXX test/cpp_headers/scsi.o 00:01:47.026 CC examples/ioat/verify/verify.o 00:01:47.026 CXX test/cpp_headers/sock.o 00:01:47.026 CXX test/cpp_headers/string.o 00:01:47.026 CXX test/cpp_headers/thread.o 00:01:47.026 CXX test/cpp_headers/trace_parser.o 00:01:47.026 CXX test/cpp_headers/trace.o 00:01:47.026 CXX test/cpp_headers/stdinc.o 00:01:47.026 CXX test/cpp_headers/ublk.o 00:01:47.026 CXX test/cpp_headers/tree.o 00:01:47.026 CXX test/cpp_headers/util.o 00:01:47.026 CXX test/cpp_headers/vfio_user_pci.o 00:01:47.026 CC examples/util/zipf/zipf.o 00:01:47.026 CXX test/cpp_headers/vfio_user_spec.o 00:01:47.026 CXX test/cpp_headers/uuid.o 00:01:47.026 CXX test/cpp_headers/version.o 00:01:47.026 CXX test/cpp_headers/vhost.o 00:01:47.026 CXX test/cpp_headers/xor.o 00:01:47.026 CXX test/cpp_headers/vmd.o 00:01:47.026 CXX test/cpp_headers/zipf.o 00:01:47.026 CC test/app/histogram_perf/histogram_perf.o 00:01:47.026 CC test/env/vtophys/vtophys.o 00:01:47.026 CC test/app/stub/stub.o 00:01:47.026 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:47.026 CC test/app/jsoncat/jsoncat.o 00:01:47.026 LINK spdk_nvme_discover 00:01:47.026 CC test/env/pci/pci_ut.o 00:01:47.026 LINK rpc_client_test 00:01:47.026 CC test/dma/test_dma/test_dma.o 00:01:47.026 CC test/env/memory/memory_ut.o 00:01:47.284 CC test/app/bdev_svc/bdev_svc.o 00:01:47.284 CC app/fio/bdev/fio_plugin.o 00:01:47.284 LINK spdk_trace_record 00:01:47.284 LINK interrupt_tgt 00:01:47.284 LINK nvmf_tgt 00:01:47.284 LINK iscsi_tgt 00:01:47.542 LINK jsoncat 00:01:47.542 LINK zipf 00:01:47.542 CC test/env/mem_callbacks/mem_callbacks.o 00:01:47.542 LINK spdk_trace 00:01:47.542 LINK spdk_tgt 00:01:47.542 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:47.542 LINK spdk_dd 00:01:47.542 CC 
test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:47.542 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:47.542 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:47.542 LINK ioat_perf 00:01:47.542 LINK vtophys 00:01:47.542 LINK env_dpdk_post_init 00:01:47.802 LINK histogram_perf 00:01:47.802 LINK poller_perf 00:01:47.802 LINK stub 00:01:47.802 LINK bdev_svc 00:01:47.802 CC app/vhost/vhost.o 00:01:47.802 LINK spdk_nvme_perf 00:01:47.802 LINK verify 00:01:47.802 LINK test_dma 00:01:47.802 CC examples/vmd/lsvmd/lsvmd.o 00:01:47.802 CC examples/idxd/perf/perf.o 00:01:47.802 CC examples/vmd/led/led.o 00:01:47.802 CC examples/thread/thread/thread_ex.o 00:01:47.802 CC examples/sock/hello_world/hello_sock.o 00:01:48.062 LINK vhost_fuzz 00:01:48.062 LINK spdk_nvme 00:01:48.062 LINK pci_ut 00:01:48.062 LINK vhost 00:01:48.062 LINK lsvmd 00:01:48.062 LINK nvme_fuzz 00:01:48.062 LINK spdk_nvme_identify 00:01:48.062 LINK led 00:01:48.062 LINK mem_callbacks 00:01:48.062 LINK spdk_bdev 00:01:48.062 LINK spdk_top 00:01:48.062 LINK thread 00:01:48.062 CC test/event/reactor_perf/reactor_perf.o 00:01:48.062 CC test/event/event_perf/event_perf.o 00:01:48.062 CC test/event/reactor/reactor.o 00:01:48.062 LINK hello_sock 00:01:48.062 CC test/event/app_repeat/app_repeat.o 00:01:48.323 CC test/event/scheduler/scheduler.o 00:01:48.323 LINK idxd_perf 00:01:48.323 LINK event_perf 00:01:48.323 LINK reactor_perf 00:01:48.323 LINK reactor 00:01:48.323 CC test/nvme/fused_ordering/fused_ordering.o 00:01:48.323 CC test/nvme/overhead/overhead.o 00:01:48.323 CC test/nvme/connect_stress/connect_stress.o 00:01:48.323 CC test/nvme/reset/reset.o 00:01:48.323 CC test/nvme/compliance/nvme_compliance.o 00:01:48.323 CC test/nvme/err_injection/err_injection.o 00:01:48.323 CC test/nvme/simple_copy/simple_copy.o 00:01:48.323 CC test/nvme/sgl/sgl.o 00:01:48.323 CC test/nvme/fdp/fdp.o 00:01:48.323 CC test/nvme/reserve/reserve.o 00:01:48.323 CC test/nvme/e2edp/nvme_dp.o 00:01:48.323 CC 
test/nvme/boot_partition/boot_partition.o 00:01:48.323 LINK app_repeat 00:01:48.323 CC test/nvme/startup/startup.o 00:01:48.323 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:48.323 CC test/blobfs/mkfs/mkfs.o 00:01:48.323 CC test/nvme/aer/aer.o 00:01:48.323 CC test/nvme/cuse/cuse.o 00:01:48.323 CC test/accel/dif/dif.o 00:01:48.583 LINK scheduler 00:01:48.583 LINK memory_ut 00:01:48.583 CC test/lvol/esnap/esnap.o 00:01:48.583 LINK err_injection 00:01:48.583 LINK boot_partition 00:01:48.583 LINK connect_stress 00:01:48.583 LINK fused_ordering 00:01:48.583 CC examples/nvme/reconnect/reconnect.o 00:01:48.583 CC examples/nvme/hello_world/hello_world.o 00:01:48.583 LINK startup 00:01:48.583 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:48.583 LINK doorbell_aers 00:01:48.583 CC examples/nvme/abort/abort.o 00:01:48.583 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:48.583 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:48.583 LINK reserve 00:01:48.583 CC examples/nvme/hotplug/hotplug.o 00:01:48.583 CC examples/nvme/arbitration/arbitration.o 00:01:48.583 LINK mkfs 00:01:48.583 LINK simple_copy 00:01:48.583 LINK nvme_dp 00:01:48.583 LINK aer 00:01:48.583 LINK sgl 00:01:48.583 LINK overhead 00:01:48.583 CC examples/accel/perf/accel_perf.o 00:01:48.843 LINK fdp 00:01:48.843 CC examples/blob/hello_world/hello_blob.o 00:01:48.843 CC examples/blob/cli/blobcli.o 00:01:48.843 LINK reset 00:01:48.843 LINK nvme_compliance 00:01:48.843 LINK cmb_copy 00:01:48.843 LINK hello_world 00:01:48.843 LINK pmr_persistence 00:01:48.843 LINK dif 00:01:48.843 LINK iscsi_fuzz 00:01:48.843 LINK hotplug 00:01:48.843 LINK reconnect 00:01:48.843 LINK arbitration 00:01:48.843 LINK abort 00:01:49.103 LINK hello_blob 00:01:49.103 LINK nvme_manage 00:01:49.103 LINK accel_perf 00:01:49.103 LINK blobcli 00:01:49.397 CC test/bdev/bdevio/bdevio.o 00:01:49.657 LINK cuse 00:01:49.657 CC examples/bdev/bdevperf/bdevperf.o 00:01:49.657 CC examples/bdev/hello_world/hello_bdev.o 00:01:49.918 LINK bdevio 
00:01:49.918 LINK hello_bdev 00:01:50.488 LINK bdevperf 00:01:51.059 CC examples/nvmf/nvmf/nvmf.o 00:01:51.319 LINK nvmf 00:01:52.702 LINK esnap 00:01:52.963 00:01:52.963 real 0m50.706s 00:01:52.963 user 6m29.947s 00:01:52.963 sys 4m9.795s 00:01:52.963 22:50:10 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:52.963 22:50:10 make -- common/autotest_common.sh@10 -- $ set +x 00:01:52.963 ************************************ 00:01:52.963 END TEST make 00:01:52.963 ************************************ 00:01:52.963 22:50:10 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:52.963 22:50:10 -- pm/common@29 -- $ signal_monitor_resources TERM 00:01:52.963 22:50:10 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:01:52.963 22:50:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:52.963 22:50:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:52.963 22:50:10 -- pm/common@44 -- $ pid=492548 00:01:52.963 22:50:10 -- pm/common@50 -- $ kill -TERM 492548 00:01:52.963 22:50:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:52.963 22:50:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:52.963 22:50:10 -- pm/common@44 -- $ pid=492549 00:01:52.963 22:50:10 -- pm/common@50 -- $ kill -TERM 492549 00:01:52.963 22:50:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:52.963 22:50:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:52.963 22:50:10 -- pm/common@44 -- $ pid=492551 00:01:52.963 22:50:10 -- pm/common@50 -- $ kill -TERM 492551 00:01:52.963 22:50:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:52.963 22:50:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:52.963 22:50:10 
-- pm/common@44 -- $ pid=492575 00:01:52.963 22:50:10 -- pm/common@50 -- $ sudo -E kill -TERM 492575 00:01:53.225 22:50:10 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:53.225 22:50:10 -- nvmf/common.sh@7 -- # uname -s 00:01:53.225 22:50:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:53.225 22:50:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:53.225 22:50:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:53.225 22:50:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:53.225 22:50:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:53.225 22:50:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:53.225 22:50:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:53.225 22:50:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:53.225 22:50:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:53.225 22:50:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:53.225 22:50:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:01:53.225 22:50:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:01:53.225 22:50:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:53.225 22:50:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:53.225 22:50:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:53.225 22:50:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:53.225 22:50:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:53.225 22:50:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:53.225 22:50:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:53.225 22:50:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:53.225 22:50:10 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.225 22:50:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.225 22:50:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.225 22:50:10 -- paths/export.sh@5 -- # export PATH 00:01:53.225 22:50:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.225 22:50:10 -- nvmf/common.sh@47 -- # : 0 00:01:53.225 22:50:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:53.225 22:50:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:53.225 22:50:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:53.225 22:50:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:53.225 22:50:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:53.225 22:50:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:53.225 22:50:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:53.225 22:50:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:53.225 22:50:10 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:53.225 22:50:10 -- spdk/autotest.sh@32 -- # 
uname -s 00:01:53.225 22:50:10 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:53.225 22:50:10 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:53.225 22:50:10 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:53.225 22:50:10 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:53.225 22:50:10 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:53.225 22:50:10 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:53.225 22:50:10 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:53.225 22:50:10 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:53.225 22:50:10 -- spdk/autotest.sh@48 -- # udevadm_pid=555670 00:01:53.225 22:50:10 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:53.225 22:50:10 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:53.225 22:50:10 -- pm/common@17 -- # local monitor 00:01:53.225 22:50:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.225 22:50:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.225 22:50:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.225 22:50:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.225 22:50:10 -- pm/common@21 -- # date +%s 00:01:53.225 22:50:10 -- pm/common@25 -- # sleep 1 00:01:53.225 22:50:10 -- pm/common@21 -- # date +%s 00:01:53.225 22:50:10 -- pm/common@21 -- # date +%s 00:01:53.225 22:50:10 -- pm/common@21 -- # date +%s 00:01:53.225 22:50:10 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721854210 00:01:53.225 22:50:10 -- pm/common@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721854210 00:01:53.225 22:50:10 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721854210 00:01:53.225 22:50:10 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721854210 00:01:53.225 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721854210_collect-vmstat.pm.log 00:01:53.225 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721854210_collect-cpu-load.pm.log 00:01:53.225 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721854210_collect-cpu-temp.pm.log 00:01:53.225 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721854210_collect-bmc-pm.bmc.pm.log 00:01:54.165 22:50:11 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:54.165 22:50:11 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:54.165 22:50:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:01:54.165 22:50:11 -- common/autotest_common.sh@10 -- # set +x 00:01:54.165 22:50:11 -- spdk/autotest.sh@59 -- # create_test_list 00:01:54.165 22:50:11 -- common/autotest_common.sh@748 -- # xtrace_disable 00:01:54.165 22:50:11 -- common/autotest_common.sh@10 -- # set +x 00:01:54.425 22:50:11 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:54.425 22:50:11 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:54.425 22:50:11 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:54.425 22:50:11 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:54.425 22:50:11 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:54.425 22:50:11 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:54.425 22:50:11 -- common/autotest_common.sh@1455 -- # uname 00:01:54.425 22:50:11 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:01:54.425 22:50:11 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:54.425 22:50:11 -- common/autotest_common.sh@1475 -- # uname 00:01:54.425 22:50:11 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:01:54.425 22:50:11 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:54.425 22:50:11 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:54.425 22:50:11 -- spdk/autotest.sh@72 -- # hash lcov 00:01:54.425 22:50:11 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:54.425 22:50:11 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:54.425 --rc lcov_branch_coverage=1 00:01:54.425 --rc lcov_function_coverage=1 00:01:54.425 --rc genhtml_branch_coverage=1 00:01:54.425 --rc genhtml_function_coverage=1 00:01:54.425 --rc genhtml_legend=1 00:01:54.425 --rc geninfo_all_blocks=1 00:01:54.425 ' 00:01:54.425 22:50:11 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:54.425 --rc lcov_branch_coverage=1 00:01:54.425 --rc lcov_function_coverage=1 00:01:54.425 --rc genhtml_branch_coverage=1 00:01:54.425 --rc genhtml_function_coverage=1 00:01:54.425 --rc genhtml_legend=1 00:01:54.425 --rc geninfo_all_blocks=1 00:01:54.425 ' 00:01:54.425 22:50:11 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:54.425 --rc lcov_branch_coverage=1 00:01:54.425 --rc lcov_function_coverage=1 00:01:54.425 --rc genhtml_branch_coverage=1 00:01:54.425 --rc 
genhtml_function_coverage=1 00:01:54.425 --rc genhtml_legend=1 00:01:54.425 --rc geninfo_all_blocks=1 00:01:54.425 --no-external' 00:01:54.425 22:50:11 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:54.425 --rc lcov_branch_coverage=1 00:01:54.425 --rc lcov_function_coverage=1 00:01:54.425 --rc genhtml_branch_coverage=1 00:01:54.425 --rc genhtml_function_coverage=1 00:01:54.425 --rc genhtml_legend=1 00:01:54.425 --rc geninfo_all_blocks=1 00:01:54.425 --no-external' 00:01:54.425 22:50:11 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:54.425 lcov: LCOV version 1.14 00:01:54.425 22:50:12 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:06.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:06.691 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:18.926 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:18.926 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:18.926 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:18.926 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:18.926 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions 
found 00:02:18.926 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:18.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:18.927 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:18.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:18.927 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:18.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:18.927 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:18.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:18.927 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:18.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:18.927 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:18.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:18.927 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:18.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:18.927 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:18.927 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:18.927 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:18.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:18.927 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:18.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:18.927 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:18.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:18.927 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:18.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:18.927 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:18.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:18.927 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:18.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:18.927 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:18.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:18.927 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:18.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:18.927 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:18.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:18.927 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:18.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:18.927 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:18.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:18.927 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:18.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:18.927 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:18.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:18.927 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:18.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:18.927 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:18.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:18.927 geninfo: WARNING: GCOV did 
not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno
00:02:18.927-00:02:19.451 geninfo: WARNING: GCOV did not produce any data ("no functions found") for the remaining header-only objects under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/, one warning per .gcno: hexlify, gpt_spec, histogram_data, event, env, ftl, iscsi_spec, idxd_spec, idxd, file, init, json, ioat, log, notify, keyring_module, ioat_spec, jsonrpc, mmio, nbd, lvol, memory, keyring, likely, nvme_ocssd, nvme_spec, nvme_intel, opal, nvmf_fc_spec, nvme, net, nvme_zns, nvmf_cmd, nvme_ocssd_spec, nvmf_transport, pipe, nvmf_spec, pci_ids, nvmf, tree, queue, opal_spec, sock, vfio_user_pci, scheduler, reduce, scsi, rpc, scsi_spec, string, thread, stdinc, util, ublk, uuid, vfio_user_spec, trace, vmd, vhost, xor, version, trace_parser, zipf
00:02:23.656 22:50:41 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:23.656 22:50:41 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:23.656 22:50:41 -- common/autotest_common.sh@10 -- # set +x 00:02:23.657 22:50:41 -- spdk/autotest.sh@91 -- # rm -f 00:02:23.657 22:50:41 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:27.860 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:02:27.860 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:02:27.860 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:02:27.860 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:02:27.860 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:02:27.860 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:02:27.860 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:02:27.860 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:02:27.860 0000:65:00.0 (144d a80a): Already using the nvme driver 00:02:27.860 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:02:27.860 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:02:27.860 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:02:27.860 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:02:27.860 0000:00:01.2 (8086 0b00): Already using the ioatdma driver
00:02:27.860 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:02:27.860 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:02:27.860 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:02:27.860 22:50:45 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:27.860 22:50:45 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:27.860 22:50:45 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:27.860 22:50:45 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:27.860 22:50:45 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:27.861 22:50:45 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:27.861 22:50:45 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:27.861 22:50:45 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:27.861 22:50:45 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:27.861 22:50:45 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:27.861 22:50:45 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:27.861 22:50:45 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:27.861 22:50:45 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:27.861 22:50:45 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:27.861 22:50:45 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:27.861 No valid GPT data, bailing 00:02:27.861 22:50:45 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:27.861 22:50:45 -- scripts/common.sh@391 -- # pt= 00:02:27.861 22:50:45 -- scripts/common.sh@392 -- # return 1 00:02:27.861 22:50:45 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:27.861 1+0 records in 00:02:27.861 1+0 records out 00:02:27.861 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00168277 s, 623 MB/s 00:02:27.861 22:50:45 -- spdk/autotest.sh@118 -- # sync 00:02:27.861 
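The pre-cleanup trace above (`get_zoned_devs` / `is_block_zoned`) walks `/sys/block/nvme*` and reads each device's `queue/zoned` attribute, collecting any device that is not `none` (i.e. host-aware or host-managed zoned storage) before deciding how to wipe it. A minimal sketch of that scan, assuming a sysfs-style layout; the root-directory parameter is added here for testability and is not part of the original helper:

```shell
# Sketch of the zoned-device scan traced above: for each block device,
# read its queue/zoned attribute and print the device name when it is
# anything other than "none".
get_zoned_devs() {
  sysfs_root=${1:-/sys/block}   # hypothetical parameter, for testing
  for dev in "$sysfs_root"/*; do
    [ -e "$dev/queue/zoned" ] || continue   # not a block device dir
    zoned=$(cat "$dev/queue/zoned")
    [ "$zoned" != "none" ] && echo "${dev##*/}"
  done
  return 0
}
```

On this run the scan found nothing zoned, which is why the trace shows `(( 0 > 0 ))` and the test proceeds straight to the GPT check and `dd` wipe.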
22:50:45 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:27.861 22:50:45 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:27.861 22:50:45 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:35.999 22:50:53 -- spdk/autotest.sh@124 -- # uname -s 00:02:35.999 22:50:53 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:35.999 22:50:53 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:35.999 22:50:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:35.999 22:50:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:35.999 22:50:53 -- common/autotest_common.sh@10 -- # set +x 00:02:35.999 ************************************ 00:02:35.999 START TEST setup.sh 00:02:35.999 ************************************ 00:02:35.999 22:50:53 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:35.999 * Looking for test storage... 
00:02:35.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:35.999 22:50:53 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:35.999 22:50:53 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:35.999 22:50:53 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:35.999 22:50:53 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:35.999 22:50:53 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:35.999 22:50:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:35.999 ************************************ 00:02:35.999 START TEST acl 00:02:35.999 ************************************ 00:02:35.999 22:50:53 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:35.999 * Looking for test storage... 00:02:35.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:35.999 22:50:53 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:35.999 22:50:53 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:35.999 22:50:53 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:35.999 22:50:53 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:35.999 22:50:53 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:35.999 22:50:53 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:35.999 22:50:53 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:35.999 22:50:53 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:35.999 22:50:53 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:35.999 22:50:53 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:35.999 22:50:53 
setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:35.999 22:50:53 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:35.999 22:50:53 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:35.999 22:50:53 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:35.999 22:50:53 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:35.999 22:50:53 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:40.205 22:50:57 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:40.205 22:50:57 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:40.205 22:50:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:40.205 22:50:57 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:40.205 22:50:57 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:40.205 22:50:57 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:44.414 Hugepages 00:02:44.414 node hugesize free / total 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.414 00:02:44.414 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:44.414 
22:51:01 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.1 == *:*:*.* ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@20 -- 
# continue 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 
00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:44.414 22:51:01 setup.sh.acl -- setup/acl.sh@54 -- # 
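The loop traced above reads the `setup.sh status` table one row at a time with `read -r _ dev _ _ _ driver _`, skips hugepage/header rows whose second field is not a PCI BDF, skips `ioatdma` rows, and keeps the single `nvme` device at 0000:65:00.0. A sketch of that filter, assuming the BDF is the second column and the driver the sixth (the function name here is hypothetical):

```shell
# Hypothetical helper mirroring the acl.sh status-table filter: keep only
# rows whose second field looks like a PCI BDF and whose sixth field is
# the nvme driver.
collect_nvme_bdfs() {
  while read -r type bdf vendor device numa driver rest; do
    case $bdf in
      *:*:*.*) ;;          # looks like a BDF, keep inspecting
      *) continue ;;       # header or hugepage row
    esac
    [ "$driver" = nvme ] || continue   # ioatdma and others are skipped
    echo "$bdf"
  done
}
```

This is why the trace ends with `(( 1 > 0 ))`: exactly one NVMe controller survived the filter.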
run_test denied denied 00:02:44.414 22:51:01 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:44.414 22:51:01 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:44.414 22:51:01 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:44.414 ************************************ 00:02:44.414 START TEST denied 00:02:44.414 ************************************ 00:02:44.414 22:51:01 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:02:44.414 22:51:01 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:02:44.414 22:51:01 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:44.414 22:51:01 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:02:44.414 22:51:01 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:44.414 22:51:01 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:47.748 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:02:47.748 22:51:05 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:02:47.748 22:51:05 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:47.748 22:51:05 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:47.748 22:51:05 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:02:47.748 22:51:05 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:02:47.748 22:51:05 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:47.748 22:51:05 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:47.748 22:51:05 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:47.748 22:51:05 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:47.748 22:51:05 setup.sh.acl.denied -- 
setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:53.034 00:02:53.034 real 0m7.996s 00:02:53.034 user 0m2.399s 00:02:53.034 sys 0m4.695s 00:02:53.034 22:51:09 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:53.034 22:51:09 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:53.034 ************************************ 00:02:53.034 END TEST denied 00:02:53.034 ************************************ 00:02:53.034 22:51:09 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:53.034 22:51:09 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:53.034 22:51:09 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:53.034 22:51:09 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:53.034 ************************************ 00:02:53.034 START TEST allowed 00:02:53.034 ************************************ 00:02:53.034 22:51:09 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:02:53.034 22:51:09 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:02:53.034 22:51:09 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:53.034 22:51:09 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:02:53.034 22:51:09 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:53.034 22:51:09 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:58.318 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:02:58.318 22:51:15 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:58.318 22:51:15 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:58.318 22:51:15 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:58.318 22:51:15 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:58.318 22:51:15 
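The denied test above sets `PCI_BLOCKED=' 0000:65:00.0'` and expects `setup.sh config` to print "Skipping denied controller", while the allowed test sets `PCI_ALLOWED` and expects the controller to be bound (`nvme -> vfio-pci`). A sketch of the blocklist gate this implies; the function name is hypothetical and SPDK's real setup.sh logic is more involved:

```shell
# Hypothetical gate: a space-separated PCI_BLOCKED list is consulted
# before a controller at the given BDF would be reconfigured.
# Returns 0 (allowed) or 1 (blocked).
pci_allowed() {
  bdf=$1
  for blocked in $PCI_BLOCKED; do   # intentional word splitting
    [ "$bdf" = "$blocked" ] && return 1
  done
  return 0
}
```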
setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:02.526 00:03:02.526 real 0m10.099s 00:03:02.526 user 0m3.028s 00:03:02.526 sys 0m5.410s 00:03:02.526 22:51:19 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:02.526 22:51:19 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:02.526 ************************************ 00:03:02.526 END TEST allowed 00:03:02.526 ************************************ 00:03:02.526 00:03:02.526 real 0m26.503s 00:03:02.526 user 0m8.509s 00:03:02.526 sys 0m15.644s 00:03:02.526 22:51:19 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:02.526 22:51:19 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:02.526 ************************************ 00:03:02.526 END TEST acl 00:03:02.526 ************************************ 00:03:02.526 22:51:20 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:02.526 22:51:20 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:02.526 22:51:20 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:02.526 22:51:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:02.526 ************************************ 00:03:02.526 START TEST hugepages 00:03:02.526 ************************************ 00:03:02.526 22:51:20 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:02.526 * Looking for test storage... 
00:03:02.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 105683520 kB' 'MemAvailable: 110308180 kB' 'Buffers: 2704 kB' 'Cached: 11465192 kB' 'SwapCached: 0 kB' 'Active: 7303608 kB' 'Inactive: 4665652 kB' 'Active(anon): 6906560 kB' 'Inactive(anon): 0 kB' 'Active(file): 397048 kB' 'Inactive(file): 4665652 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505056 kB' 'Mapped: 180264 kB' 'Shmem: 6405196 kB' 'KReclaimable: 602904 kB' 'Slab: 1468004 kB' 'SReclaimable: 602904 kB' 'SUnreclaim: 865100 kB' 'KernelStack: 27280 kB' 'PageTables: 8556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460880 kB' 'Committed_AS: 8485396 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238748 kB' 'VmallocChunk: 0 kB' 'Percpu: 209664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3718516 kB' 'DirectMap2M: 22175744 kB' 'DirectMap1G: 110100480 kB' 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.526 22:51:20 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.526 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.527 22:51:20 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.527 22:51:20 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.527 22:51:20 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.527 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.528 
22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:02.528 22:51:20 setup.sh.hugepages -- 
setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in 
"${!nodes_sys[@]}" 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:02.528 22:51:20 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:02.528 22:51:20 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:02.528 22:51:20 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:02.528 22:51:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:02.528 ************************************ 00:03:02.528 START TEST default_setup 00:03:02.528 ************************************ 00:03:02.528 22:51:20 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:03:02.528 22:51:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:02.528 22:51:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:02.528 22:51:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:02.528 22:51:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:02.528 22:51:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:02.528 22:51:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:02.528 22:51:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 
00:03:02.528 22:51:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:02.528 22:51:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:02.528 22:51:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:02.528 22:51:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:02.528 22:51:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:02.528 22:51:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:02.528 22:51:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:02.528 22:51:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:02.528 22:51:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:02.528 22:51:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:02.528 22:51:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:02.528 22:51:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:02.528 22:51:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:02.528 22:51:20 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:02.528 22:51:20 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:06.736 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:06.736 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:06.736 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:06.736 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:06.736 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:06.736 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:06.736 0000:80:01.0 
(8086 0b00): ioatdma -> vfio-pci 00:03:06.736 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:06.736 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:06.736 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:06.736 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:06.736 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:06.736 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:06.736 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:06.736 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:06.736 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:06.736 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:06.736 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:06.736 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:06.736 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:06.736 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:06.736 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:06.736 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:06.736 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:06.737 22:51:24 
setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107880264 kB' 'MemAvailable: 112504892 kB' 'Buffers: 2704 kB' 'Cached: 11465324 kB' 'SwapCached: 0 kB' 'Active: 7321688 kB' 'Inactive: 4665652 kB' 'Active(anon): 6924640 kB' 'Inactive(anon): 0 kB' 'Active(file): 397048 kB' 'Inactive(file): 4665652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522256 kB' 'Mapped: 180508 kB' 'Shmem: 6405328 kB' 'KReclaimable: 602872 kB' 'Slab: 1465912 kB' 'SReclaimable: 602872 kB' 'SUnreclaim: 863040 kB' 'KernelStack: 27312 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8507224 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238860 kB' 'VmallocChunk: 0 kB' 'Percpu: 209664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 
'DirectMap4k: 3718516 kB' 'DirectMap2M: 22175744 kB' 'DirectMap1G: 110100480 kB' 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.737 22:51:24 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.737 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.738 22:51:24 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.738 22:51:24 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107882116 kB' 'MemAvailable: 112506744 kB' 'Buffers: 2704 kB' 'Cached: 11465328 kB' 'SwapCached: 0 kB' 'Active: 7321756 kB' 'Inactive: 4665652 kB' 'Active(anon): 6924708 kB' 'Inactive(anon): 0 kB' 'Active(file): 397048 kB' 'Inactive(file): 4665652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522868 kB' 'Mapped: 180448 kB' 'Shmem: 6405332 kB' 'KReclaimable: 602872 kB' 'Slab: 1465904 kB' 'SReclaimable: 602872 kB' 'SUnreclaim: 863032 kB' 'KernelStack: 27456 kB' 'PageTables: 8972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8508852 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238876 kB' 'VmallocChunk: 0 kB' 'Percpu: 209664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3718516 kB' 'DirectMap2M: 22175744 kB' 'DirectMap1G: 110100480 kB' 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.738 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.739 22:51:24 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.739 22:51:24 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.739 22:51:24 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.739 22:51:24 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.739 22:51:24 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.739 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.740 22:51:24 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.740 22:51:24 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107883852 kB' 'MemAvailable: 112508480 kB' 'Buffers: 2704 kB' 'Cached: 11465344 kB' 'SwapCached: 0 kB' 'Active: 7321800 kB' 'Inactive: 4665652 kB' 'Active(anon): 6924752 kB' 'Inactive(anon): 0 kB' 'Active(file): 397048 kB' 'Inactive(file): 4665652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522660 kB' 'Mapped: 180380 kB' 'Shmem: 6405348 kB' 'KReclaimable: 602872 kB' 'Slab: 1465904 kB' 'SReclaimable: 602872 kB' 'SUnreclaim: 863032 kB' 'KernelStack: 27376 kB' 'PageTables: 8600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8508872 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238972 kB' 'VmallocChunk: 0 kB' 'Percpu: 209664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3718516 kB' 'DirectMap2M: 22175744 kB' 'DirectMap1G: 110100480 kB' 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.740 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.741 22:51:24 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.741 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.742 
22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.742 
22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.742 22:51:24 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:06.742 nr_hugepages=1024 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:06.742 resv_hugepages=0 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:06.742 surplus_hugepages=0 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:06.742 anon_hugepages=0 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.742 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107888696 kB' 'MemAvailable: 112513324 kB' 'Buffers: 2704 kB' 'Cached: 11465344 kB' 'SwapCached: 0 kB' 'Active: 7322032 kB' 'Inactive: 4665652 kB' 'Active(anon): 6924984 kB' 'Inactive(anon): 0 kB' 'Active(file): 397048 kB' 'Inactive(file): 4665652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522880 kB' 'Mapped: 180380 kB' 'Shmem: 6405348 kB' 'KReclaimable: 602872 kB' 'Slab: 1465904 kB' 'SReclaimable: 602872 kB' 'SUnreclaim: 863032 kB' 'KernelStack: 27504 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8508896 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238972 kB' 'VmallocChunk: 0 kB' 'Percpu: 
209664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3718516 kB' 'DirectMap2M: 22175744 kB' 'DirectMap1G: 110100480 kB' 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.743 22:51:24 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.743 22:51:24 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.743 22:51:24 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.743 22:51:24 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.743 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.744 22:51:24 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.744 22:51:24 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.744 22:51:24 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( 
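For reference, the key lookup traced above (setup/common.sh@16 through @33: printf the meminfo contents, then `IFS=': ' read -r var val _` per line, `continue` until the requested key matches, `echo` the value) reduces to a small helper. This is a minimal sketch of that pattern, not the actual setup/common.sh code; the `get_meminfo` name and the optional file argument are assumptions for illustration.

```shell
# Sketch (assumption): a condensed form of the get_meminfo lookup seen in the
# trace. Reads a meminfo-format file and prints the value for one key.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local var val _
    # Each line looks like "HugePages_Total: 1024" or "MemTotal: 126338860 kB";
    # split on ': ' and keep scanning until the requested key matches.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}
```

The real script additionally strips a leading `Node N ` prefix (via `mem=("${mem[@]#Node +([0-9]) }")`) so the same loop works on per-node meminfo files; that step is omitted here.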
nodes_test[node] += resv )) 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.744 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58316332 kB' 'MemUsed: 7342676 kB' 'SwapCached: 0 kB' 'Active: 1946876 kB' 'Inactive: 1037188 kB' 'Active(anon): 1728268 kB' 'Inactive(anon): 0 kB' 'Active(file): 218608 kB' 'Inactive(file): 1037188 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2868256 kB' 'Mapped: 95040 kB' 'AnonPages: 119016 kB' 'Shmem: 1612460 kB' 'KernelStack: 13752 kB' 'PageTables: 3228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 200028 kB' 'Slab: 620320 kB' 'SReclaimable: 200028 kB' 'SUnreclaim: 420292 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.745 
22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.745 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.745 22:51:24 
[xtrace elided: setup/common.sh@31-32 IFS=': ' read loop skipping the remaining /proc/meminfo fields (PageTables ... HugePages_Free) while scanning for HugePages_Surp]
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.746 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:06.746 22:51:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:06.746 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:06.746 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:06.746 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:06.746 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:06.746 22:51:24
setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:06.746 node0=1024 expecting 1024 00:03:06.746 22:51:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:06.746 00:03:06.746 real 0m4.187s 00:03:06.746 user 0m1.645s 00:03:06.746 sys 0m2.519s 00:03:06.746 22:51:24 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:06.746 22:51:24 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:06.746 ************************************ 00:03:06.746 END TEST default_setup 00:03:06.746 ************************************ 00:03:06.746 22:51:24 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:06.746 22:51:24 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:06.746 22:51:24 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:06.746 22:51:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:06.746 ************************************ 00:03:06.746 START TEST per_node_1G_alloc 00:03:06.746 ************************************ 00:03:06.746 22:51:24 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:03:06.746 22:51:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:06.746 22:51:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:06.746 22:51:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:06.746 22:51:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:06.746 22:51:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:06.746 22:51:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:06.746 22:51:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:06.746 22:51:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:06.746 22:51:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:06.746 22:51:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:06.746 22:51:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:06.746 22:51:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:06.746 22:51:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:06.746 22:51:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:06.746 22:51:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:06.746 22:51:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:06.746 22:51:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:06.746 22:51:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:06.746 22:51:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:06.746 22:51:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:06.746 22:51:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:06.746 22:51:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:06.746 22:51:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:06.746 22:51:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:06.746 22:51:24 
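The get_test_nr_hugepages_per_node steps traced above can be sketched as follows. This is a hypothetical, simplified re-implementation (not SPDK's actual setup/hugepages.sh): every node listed in HUGENODE receives the same per-node hugepage count, 512 in this run.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the per-node hugepage split seen in the trace:
# assign the requested per-node count to each user-supplied NUMA node.
nr_hugepages=512
user_nodes=(0 1)      # parsed from HUGENODE=0,1
nodes_test=()

# Each node named by the user gets the same allocation (hugepages.sh@70-71).
for node in "${user_nodes[@]}"; do
  nodes_test[node]=$nr_hugepages
done

# Report the resulting per-node counts.
for node in "${!nodes_test[@]}"; do
  echo "node${node}=${nodes_test[node]}"
done
# -> node0=512
#    node1=512
```

An indexed array keeps the node order deterministic, matching the sorted node IDs the test iterates over.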
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:06.746 22:51:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.746 22:51:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:10.957 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:10.957 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:10.957 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:10.957 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:10.957 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:10.957 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:10.957 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:10.957 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:10.957 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:10.957 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:10.957 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:10.957 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:10.957 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:10.957 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:10.957 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:10.957 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:10.957 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:10.957 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:10.957 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:10.957 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:10.957 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local 
sorted_t 00:03:10.957 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:10.957 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:10.957 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:10.957 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:10.957 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:10.957 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:10.957 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:10.957 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:10.957 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:10.957 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.957 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.957 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.957 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.957 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.957 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.957 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.957 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.957 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107890580 kB' 'MemAvailable: 
112515208 kB' 'Buffers: 2704 kB' 'Cached: 11465484 kB' 'SwapCached: 0 kB' 'Active: 7320212 kB' 'Inactive: 4665652 kB' 'Active(anon): 6923164 kB' 'Inactive(anon): 0 kB' 'Active(file): 397048 kB' 'Inactive(file): 4665652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520436 kB' 'Mapped: 179520 kB' 'Shmem: 6405488 kB' 'KReclaimable: 602872 kB' 'Slab: 1466412 kB' 'SReclaimable: 602872 kB' 'SUnreclaim: 863540 kB' 'KernelStack: 27392 kB' 'PageTables: 8724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8492288 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238908 kB' 'VmallocChunk: 0 kB' 'Percpu: 209664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3718516 kB' 'DirectMap2M: 22175744 kB' 'DirectMap1G: 110100480 kB' 00:03:10.958 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.958 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.958 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.958 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.958 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.958 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.958 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.958 22:51:28 
[xtrace elided: setup/common.sh@31-32 IFS=': ' read loop skipping /proc/meminfo fields (MemAvailable ... Percpu) while scanning for AnonHugePages]
22:51:28 setup.sh.hugepages.per_node_1G_alloc
-- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:10.959 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:10.959 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:10.959 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:10.959 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:10.959 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:10.959 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:10.959 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:10.959 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:10.959 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:10.959 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:10.959 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.959 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:10.959 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:10.959 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.959 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.959 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:10.959 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:10.959 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107891000 kB' 'MemAvailable: 112515628 kB' 'Buffers: 2704 kB' 'Cached: 11465488 kB' 'SwapCached: 0 kB' 'Active: 7319820 kB' 'Inactive: 4665652 kB' 'Active(anon): 6922772 kB' 'Inactive(anon): 0 kB' 'Active(file): 397048 kB' 'Inactive(file): 4665652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520068 kB' 'Mapped: 179456 kB' 'Shmem: 6405492 kB' 'KReclaimable: 602872 kB' 'Slab: 1466412 kB' 'SReclaimable: 602872 kB' 'SUnreclaim: 863540 kB' 'KernelStack: 27376 kB' 'PageTables: 8672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8492308 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238876 kB' 'VmallocChunk: 0 kB' 'Percpu: 209664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3718516 kB' 'DirectMap2M: 22175744 kB' 'DirectMap1G: 110100480 kB'
00:03:10.959 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:10.959 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... identical compare/continue iterations over the remaining meminfo keys elided ...]
00:03:10.961 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:10.961 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:10.961 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:10.961 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:10.961 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:10.961 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:10.961 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:10.961 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:10.961 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:10.961 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.961 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:10.961 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:10.961 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.961 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.961 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:10.961 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:10.961 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107891420 kB' 'MemAvailable: 112516048 kB' 'Buffers: 2704 kB' 'Cached: 11465488 kB' 'SwapCached: 0 kB' 'Active: 7319144 kB' 'Inactive: 4665652 kB' 'Active(anon): 6922096 kB' 'Inactive(anon): 0 kB' 'Active(file): 397048 kB' 'Inactive(file): 4665652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519840 kB' 'Mapped: 179376 kB' 'Shmem: 6405492 kB' 'KReclaimable: 602872 kB' 'Slab: 1466404 kB' 'SReclaimable: 602872 kB' 'SUnreclaim: 863532 kB' 'KernelStack: 27360 kB' 'PageTables: 8596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8492328 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238876 kB' 'VmallocChunk: 0 kB' 'Percpu: 209664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3718516 kB' 'DirectMap2M: 22175744 kB' 'DirectMap1G: 110100480 kB'
00:03:10.961 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:10.961 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... identical compare/continue iterations over the remaining meminfo keys elided ...]
Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.963 22:51:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.963 22:51:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.963 22:51:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.963 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.964 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.964 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:10.964 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:10.964 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:10.964 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:10.964 nr_hugepages=1024 00:03:10.964 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:10.964 resv_hugepages=0 00:03:10.964 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:10.964 surplus_hugepages=0 00:03:10.964 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:10.964 anon_hugepages=0 00:03:10.964 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:10.964 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:10.964 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:10.964 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:10.964 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:10.964 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:10.964 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.964 22:51:28 
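[editor's note] The repeated `IFS=': '` / `read -r var val _` / `continue` entries above are one xtrace line per /proc/meminfo field. The underlying lookup can be sketched as a minimal standalone version — the function name `get_meminfo` matches the trace, but the body here is an illustrative reconstruction, not SPDK's `setup/common.sh`:

```shell
#!/usr/bin/env bash
# Minimal sketch of the get_meminfo pattern traced above: read
# /proc/meminfo one "Key:   value kB" line at a time, splitting on
# IFS=': ', and print the value of the requested key.
# (Illustrative only; the real setup/common.sh also handles per-node
# /sys/devices/system/node/node$node/meminfo files.)
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip non-matching fields
        echo "$val"                        # numeric value, unit stripped by read
        return 0
    done < /proc/meminfo
    return 1                               # key not found
}

get_meminfo HugePages_Rsvd
```

In this trace the same lookup echoes 0 for HugePages_Rsvd and 1024 for HugePages_Total at setup/common.sh@33.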
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.964 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.964 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.964 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.964 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.964 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.964 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.964 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107891308 kB' 'MemAvailable: 112515936 kB' 'Buffers: 2704 kB' 'Cached: 11465548 kB' 'SwapCached: 0 kB' 'Active: 7318604 kB' 'Inactive: 4665652 kB' 'Active(anon): 6921556 kB' 'Inactive(anon): 0 kB' 'Active(file): 397048 kB' 'Inactive(file): 4665652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519208 kB' 'Mapped: 179376 kB' 'Shmem: 6405552 kB' 'KReclaimable: 602872 kB' 'Slab: 1466404 kB' 'SReclaimable: 602872 kB' 'SUnreclaim: 863532 kB' 'KernelStack: 27248 kB' 'PageTables: 8284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8492352 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238876 kB' 'VmallocChunk: 0 kB' 'Percpu: 209664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 
2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3718516 kB' 'DirectMap2M: 22175744 kB' 'DirectMap1G: 110100480 kB'
00:03:10.964 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace scan elided: each /proc/meminfo field from MemTotal through Unaccepted is read with IFS=': ' read -r var val _, compared against HugePages_Total, and skipped with continue]
00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc --
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59387756 kB' 'MemUsed: 6271252 kB' 
'SwapCached: 0 kB' 'Active: 1945356 kB' 'Inactive: 1037188 kB' 'Active(anon): 1726748 kB' 'Inactive(anon): 0 kB' 'Active(file): 218608 kB' 'Inactive(file): 1037188 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2868352 kB' 'Mapped: 94452 kB' 'AnonPages: 117440 kB' 'Shmem: 1612556 kB' 'KernelStack: 13752 kB' 'PageTables: 3024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 200028 kB' 'Slab: 620424 kB' 'SReclaimable: 200028 kB' 'SUnreclaim: 420396 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.966 22:51:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.966 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.967 22:51:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.967 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.968 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.968 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.968 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.968 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.968 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.968 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.968 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.968 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.968 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.968 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.968 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.968 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.968 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.968 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.968 22:51:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.968 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.968 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.968 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.968 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:10.968 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:10.968 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:10.968 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:10.968 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:10.968 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:10.968 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:10.968 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:10.968 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:10.968 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.968 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.968 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:10.968 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:10.968 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.968 22:51:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.968 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679852 kB' 'MemFree: 48499152 kB' 'MemUsed: 12180700 kB' 'SwapCached: 0 kB' 'Active: 5377976 kB' 'Inactive: 3628464 kB' 'Active(anon): 5199536 kB' 'Inactive(anon): 0 kB' 'Active(file): 178440 kB' 'Inactive(file): 3628464 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8599904 kB' 'Mapped: 85428 kB' 'AnonPages: 406544 kB' 'Shmem: 4793000 kB' 'KernelStack: 13480 kB' 'PageTables: 5228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 402844 kB' 'Slab: 845980 kB' 'SReclaimable: 402844 kB' 'SUnreclaim: 443136 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:10.968 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [per-field scan of the node1 dump elided: MemTotal through ShmemHugePages all fail the HugePages_Surp match and continue] 00:03:10.969 22:51:28
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.969 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.969 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.969 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.969 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.969 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.969 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.969 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.969 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.969 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.969 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.969 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.969 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.969 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.969 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.969 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.969 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.969 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.969 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.969 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.969 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.969 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.969 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.969 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.969 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.969 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.969 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.969 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:10.969 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:10.969 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:10.969 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:10.969 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:10.969 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:10.969 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:10.969 node0=512 expecting 512 00:03:10.970 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:10.970 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:10.970 
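The `[[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]` checks throughout this log rely on a bash idiom: the right-hand side of `[[ == ]]` is a glob pattern when unquoted, so escaping every character (or quoting the string) forces a literal comparison. A minimal standalone sketch of the idiom (variable names here are illustrative, not from common.sh):

```shell
#!/usr/bin/env bash
key="HugePages_Surp"

# Unquoted RHS is a glob pattern: * matches any suffix
[[ $key == HugePages_* ]] && glob_hit=yes

# Escaping each character makes the RHS literal, equivalent to quoting it
[[ $key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] && literal_hit=yes

# The literal comparison correctly rejects a different key
[[ "HugePages_Free" == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] || miss=yes

echo "$glob_hit $literal_hit $miss"   # yes yes yes
```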
22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:10.970 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:10.970 node1=512 expecting 512
00:03:10.970 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:10.970
00:03:10.970 real	0m4.048s
00:03:10.970 user	0m1.553s
00:03:10.970 sys	0m2.556s
00:03:10.970 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:10.970 22:51:28 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:10.970 ************************************
00:03:10.970 END TEST per_node_1G_alloc
00:03:10.970 ************************************
00:03:10.970 22:51:28 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:10.970 22:51:28 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:10.970 22:51:28 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:10.970 22:51:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:10.970 ************************************
00:03:10.970 START TEST even_2G_alloc
00:03:10.970 ************************************
00:03:10.970 22:51:28 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:03:10.970 22:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:10.970 22:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:10.970 22:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:10.970 22:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:10.970 22:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
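The trace above shows even_2G_alloc asking for 2097152 kB and arriving at nr_hugepages=1024, which the per-node loop then deals out as 512 pages on each of the two NUMA nodes. A simplified, self-contained sketch of that arithmetic (variable layout mirrors the log; the standalone script itself is illustrative, not the project's code):

```shell
#!/usr/bin/env bash
# Requested size in kB divided by the default hugepage size gives the
# page count; an even split distributes it across the NUMA nodes.
size_kb=2097152          # 2 GiB, as passed to get_test_nr_hugepages
hugepage_kb=2048         # default 2M hugepage size (Hugepagesize: 2048 kB)
no_nodes=2               # NUMA nodes on this rig

nr_hugepages=$(( size_kb / hugepage_kb ))    # 1024
per_node=$(( nr_hugepages / no_nodes ))      # 512

declare -a nodes_test
for (( node = no_nodes - 1; node >= 0; node-- )); do
  nodes_test[node]=$per_node                 # matches nodes_test[...]=512 above
done

echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]} node1=${nodes_test[1]}"
# nr_hugepages=1024 node0=512 node1=512
```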
00:03:10.970 22:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:10.970 22:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:10.970 22:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:10.970 22:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:10.970 22:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:10.970 22:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:10.970 22:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:10.970 22:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:10.970 22:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:10.970 22:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:10.970 22:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:10.970 22:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:10.970 22:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:10.970 22:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:10.970 22:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:10.970 22:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:10.970 22:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:10.970 22:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:10.970 22:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:10.970 22:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:10.970 22:51:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:10.970 22:51:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:10.970 22:51:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:15.180 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:15.180 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:15.180 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:15.180 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:15.180 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:15.180 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:15.180 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:15.180 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:15.180 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:15.180 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:15.180 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:15.180 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:15.180 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:15.180 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:15.180 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:15.180 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:15.180 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:15.180 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:15.180 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:15.180 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:15.180 22:51:32 setup.sh.hugepages.even_2G_alloc
-- setup/hugepages.sh@91 -- # local sorted_s
00:03:15.181 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:15.181 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:15.181 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:15.181 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:15.181 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:15.181 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:15.181 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:15.181 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:15.181 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:15.181 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.181 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.181 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.181 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.181 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.181 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:15.181 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:15.181 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107894616 kB' 'MemAvailable: 112519252 kB' 'Buffers: 2704 kB' 'Cached: 11465684 kB' 'SwapCached: 0 kB' 'Active: 7318316 kB' 'Inactive: 4665652 kB' 'Active(anon): 
6921268 kB' 'Inactive(anon): 0 kB' 'Active(file): 397048 kB' 'Inactive(file): 4665652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518572 kB' 'Mapped: 179496 kB' 'Shmem: 6405688 kB' 'KReclaimable: 602880 kB' 'Slab: 1466340 kB' 'SReclaimable: 602880 kB' 'SUnreclaim: 863460 kB' 'KernelStack: 27280 kB' 'PageTables: 8412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8493340 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238812 kB' 'VmallocChunk: 0 kB' 'Percpu: 209664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3718516 kB' 'DirectMap2M: 22175744 kB' 'DirectMap1G: 110100480 kB'
00:03:15.181 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the IFS=': ' read loop skips every /proc/meminfo field from MemTotal through HardwareCorrupted; none matches AnonHugePages]
00:03:15.182 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:15.182 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:15.182 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:15.182 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:15.182 22:51:32
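The long xtrace runs in this log all come from the same get_meminfo pattern in setup/common.sh: read /proc/meminfo (or a per-node meminfo) with `IFS=': '`, `continue` past every field until the requested key matches, then echo its value. A self-contained sketch of that read loop (the function name `get_field` and the inline sample data are illustrative; the real script reads the live meminfo files):

```shell
#!/usr/bin/env bash
# get_meminfo-style loop: split each "Key: value kB" line on ':' and
# spaces, print the value for the requested key, skip everything else.
get_field() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue   # the repeated "continue" lines above
    echo "$val"
    return 0
  done
  return 1                             # key not found
}

sample=$'MemTotal: 126338860 kB\nHugePages_Total: 1024\nHugePages_Surp: 0'
get_field HugePages_Surp <<<"$sample"  # prints 0
```

The trailing `_` in `read -r var val _` swallows the "kB" unit so `val` holds just the number.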
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:15.182 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.182 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:15.182 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:15.182 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.182 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.182 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.182 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.182 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.182 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.182 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.182 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107895236 kB' 'MemAvailable: 112519872 kB' 'Buffers: 2704 kB' 'Cached: 11465688 kB' 'SwapCached: 0 kB' 'Active: 7318344 kB' 'Inactive: 4665652 kB' 'Active(anon): 6921296 kB' 'Inactive(anon): 0 kB' 'Active(file): 397048 kB' 'Inactive(file): 4665652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518632 kB' 'Mapped: 179468 kB' 'Shmem: 6405692 kB' 'KReclaimable: 602880 kB' 'Slab: 1466308 kB' 'SReclaimable: 602880 kB' 'SUnreclaim: 863428 kB' 'KernelStack: 27264 kB' 'PageTables: 8344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8493360 kB' 'VmallocTotal: 
13743895347199 kB' 'VmallocUsed: 238780 kB' 'VmallocChunk: 0 kB' 'Percpu: 209664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3718516 kB' 'DirectMap2M: 22175744 kB' 'DirectMap1G: 110100480 kB' 00:03:15.182 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.182 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.182 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.182 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.182 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.182 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.182 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.182 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.182 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.182 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.182 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.182 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.182 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.182 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.183 22:51:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.183 
22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.183 22:51:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.183 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.183 22:51:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.184 22:51:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:15.184 22:51:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107895236 kB' 'MemAvailable: 112519872 kB' 'Buffers: 2704 kB' 'Cached: 11465704 kB' 'SwapCached: 0 kB' 'Active: 7318232 kB' 'Inactive: 4665652 kB' 'Active(anon): 6921184 kB' 'Inactive(anon): 0 kB' 'Active(file): 397048 kB' 'Inactive(file): 4665652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518500 kB' 'Mapped: 179468 kB' 'Shmem: 6405708 kB' 
'KReclaimable: 602880 kB' 'Slab: 1466308 kB' 'SReclaimable: 602880 kB' 'SUnreclaim: 863428 kB' 'KernelStack: 27248 kB' 'PageTables: 8296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8493380 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238780 kB' 'VmallocChunk: 0 kB' 'Percpu: 209664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3718516 kB' 'DirectMap2M: 22175744 kB' 'DirectMap1G: 110100480 kB' 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.184 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:15.185 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.185 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.185 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.185 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.185 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.185 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.185 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.185 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.185 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.185 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.185 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.185 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.185 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.185 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.185 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.185 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.185 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.185 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.185 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.185 22:51:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.185 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.185 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.185 22:51:32 [... identical IFS=': ' / read -r / continue iterations elided for the remaining /proc/meminfo fields, Inactive(anon) through HugePages_Free ...] setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.186 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.186 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.186 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:15.186 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:15.186 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:15.186 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 --
# echo nr_hugepages=1024 00:03:15.186 nr_hugepages=1024 00:03:15.186 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:15.186 resv_hugepages=0 00:03:15.186 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:15.186 surplus_hugepages=0 00:03:15.186 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:15.186 anon_hugepages=0 00:03:15.186 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:15.186 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:15.186 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:15.186 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:15.186 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:15.186 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:15.186 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.186 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.187 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.187 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.187 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.187 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.187 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.187 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.187 22:51:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107895236 kB' 'MemAvailable: 112519872 kB' 'Buffers: 2704 kB' 'Cached: 11465744 kB' 'SwapCached: 0 kB' 'Active: 7318020 kB' 'Inactive: 4665652 kB' 'Active(anon): 6920972 kB' 'Inactive(anon): 0 kB' 'Active(file): 397048 kB' 'Inactive(file): 4665652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518244 kB' 'Mapped: 179468 kB' 'Shmem: 6405748 kB' 'KReclaimable: 602880 kB' 'Slab: 1466308 kB' 'SReclaimable: 602880 kB' 'SUnreclaim: 863428 kB' 'KernelStack: 27248 kB' 'PageTables: 8296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8493404 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238780 kB' 'VmallocChunk: 0 kB' 'Percpu: 209664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3718516 kB' 'DirectMap2M: 22175744 kB' 'DirectMap1G: 110100480 kB' 00:03:15.187 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.187 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.187 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.187 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.187 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.187 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.187 
22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.187 22:51:32 [... identical IFS=': ' / read -r / continue iterations elided for MemAvailable through FilePmdMapped ...] setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:15.188 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.188 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.188 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.188 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.188 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.188 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.188 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.188 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.188 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.188 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.188 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.188 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:15.189 
22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) 
}") 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59391096 kB' 'MemUsed: 6267912 kB' 'SwapCached: 0 kB' 'Active: 1944664 kB' 'Inactive: 1037188 kB' 'Active(anon): 1726056 kB' 'Inactive(anon): 0 kB' 'Active(file): 218608 kB' 'Inactive(file): 1037188 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2868496 kB' 'Mapped: 94544 kB' 'AnonPages: 116600 kB' 'Shmem: 1612700 kB' 'KernelStack: 13768 kB' 'PageTables: 3080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 200028 kB' 'Slab: 620252 kB' 'SReclaimable: 200028 kB' 'SUnreclaim: 420224 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.189 
22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.189 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.190 22:51:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.190 
22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.190 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.191 22:51:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679852 kB' 'MemFree: 48505492 kB' 'MemUsed: 12174360 kB' 'SwapCached: 0 kB' 'Active: 5374324 kB' 'Inactive: 3628464 kB' 'Active(anon): 5195884 kB' 'Inactive(anon): 0 kB' 'Active(file): 178440 kB' 'Inactive(file): 3628464 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8599956 kB' 'Mapped: 84924 kB' 'AnonPages: 402668 kB' 'Shmem: 4793052 kB' 'KernelStack: 13560 kB' 'PageTables: 5480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 402852 kB' 'Slab: 846056 kB' 'SReclaimable: 402852 kB' 'SUnreclaim: 443204 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.191 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.192 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.192 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.192 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.192 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.192 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.192 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:15.192 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.192 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.192 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.192 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.192 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.192 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.192 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.192 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.192 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.192 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.192 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.192 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.192 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.192 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.192 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.192 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:15.192 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.192 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.192 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.192 22:51:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... identical 'IFS=': '' / 'read -r var val _' / '[[ <field> == HugePages_Surp ]]' / 'continue' xtrace repeats while get_meminfo scans the remaining /proc/meminfo fields, Writeback through HugePages_Free ...]
00:03:15.193 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:15.193 22:51:32 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@33 -- # echo 0
00:03:15.193 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:15.193 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:15.193 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:15.193 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:15.193 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:15.193 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:15.193 node0=512 expecting 512
00:03:15.193 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:15.193 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:15.193 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:15.193 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:15.193 node1=512 expecting 512
00:03:15.193 22:51:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:15.193
00:03:15.193 real 0m4.064s
00:03:15.193 user 0m1.691s
00:03:15.193 sys 0m2.444s
00:03:15.193 22:51:32 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:15.193 22:51:32 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:15.193 ************************************
00:03:15.193 END TEST even_2G_alloc
00:03:15.193 ************************************
00:03:15.193 22:51:32 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:15.193 22:51:32 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:15.193 22:51:32
setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:15.193 22:51:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:15.193 ************************************ 00:03:15.193 START TEST odd_alloc 00:03:15.193 ************************************ 00:03:15.193 22:51:32 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:03:15.193 22:51:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:15.193 22:51:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:15.193 22:51:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:15.193 22:51:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:15.193 22:51:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:15.193 22:51:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:15.193 22:51:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:15.193 22:51:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:15.193 22:51:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:15.193 22:51:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:15.193 22:51:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:15.193 22:51:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:15.193 22:51:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:15.193 22:51:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:15.193 22:51:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:15.193 22:51:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # 
nodes_test[_no_nodes - 1]=512 00:03:15.193 22:51:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:15.193 22:51:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:15.193 22:51:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:15.193 22:51:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:15.193 22:51:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:15.193 22:51:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:15.193 22:51:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:15.194 22:51:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:15.194 22:51:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:15.194 22:51:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:15.194 22:51:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.194 22:51:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:19.449 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:19.449 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:19.449 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:19.449 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:19.449 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:19.449 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:19.449 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:19.449 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:19.449 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:19.449 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:19.449 0000:00:01.7 (8086 0b00): Already 
using the vfio-pci driver 00:03:19.449 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:19.449 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:19.449 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:19.449 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:19.449 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:19.449 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:19.449 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:19.449 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:19.449 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:19.449 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:19.449 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:19.449 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:19.449 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:19.449 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:19.449 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:19.449 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:19.449 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:19.449 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:19.449 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.449 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.449 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.449 22:51:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.449 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.449 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.449 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.449 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.449 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107909140 kB' 'MemAvailable: 112533776 kB' 'Buffers: 2704 kB' 'Cached: 11465880 kB' 'SwapCached: 0 kB' 'Active: 7319608 kB' 'Inactive: 4665652 kB' 'Active(anon): 6922560 kB' 'Inactive(anon): 0 kB' 'Active(file): 397048 kB' 'Inactive(file): 4665652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519996 kB' 'Mapped: 179496 kB' 'Shmem: 6405884 kB' 'KReclaimable: 602880 kB' 'Slab: 1466212 kB' 'SReclaimable: 602880 kB' 'SUnreclaim: 863332 kB' 'KernelStack: 27376 kB' 'PageTables: 8844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508432 kB' 'Committed_AS: 8497364 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 239100 kB' 'VmallocChunk: 0 kB' 'Percpu: 209664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3718516 kB' 'DirectMap2M: 22175744 kB' 'DirectMap1G: 110100480 kB' 00:03:19.449 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.449 22:51:36 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue
[... identical 'IFS=': '' / 'read -r var val _' / '[[ <field> == AnonHugePages ]]' / 'continue' xtrace repeats while get_meminfo scans /proc/meminfo fields MemFree through HardwareCorrupted ...]
00:03:19.450 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:19.450 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:19.450 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:19.450 22:51:36
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:19.450 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:19.450 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:19.450 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:19.450 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:19.450 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.450 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.450 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.450 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.450 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.450 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.450 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.450 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107909104 kB' 'MemAvailable: 112533740 kB' 'Buffers: 2704 kB' 'Cached: 11465884 kB' 'SwapCached: 0 kB' 'Active: 7319532 kB' 'Inactive: 4665652 kB' 'Active(anon): 6922484 kB' 'Inactive(anon): 0 kB' 'Active(file): 397048 kB' 'Inactive(file): 4665652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519984 kB' 'Mapped: 179432 kB' 'Shmem: 6405888 kB' 'KReclaimable: 602880 kB' 'Slab: 1466208 kB' 'SReclaimable: 602880 kB' 'SUnreclaim: 863328 kB' 'KernelStack: 27408 kB' 'PageTables: 8724 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508432 kB' 'Committed_AS: 8497252 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 239068 kB' 'VmallocChunk: 0 kB' 'Percpu: 209664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3718516 kB' 'DirectMap2M: 22175744 kB' 'DirectMap1G: 110100480 kB' 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.451 
22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.451 22:51:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.451 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.452 22:51:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.452 22:51:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.452 22:51:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.452 
22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.452 22:51:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.452 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107907908 kB' 'MemAvailable: 112532544 kB' 'Buffers: 2704 kB' 'Cached: 11465900 kB' 'SwapCached: 0 kB' 'Active: 7319856 kB' 'Inactive: 4665652 kB' 'Active(anon): 6922808 kB' 'Inactive(anon): 0 kB' 'Active(file): 397048 kB' 'Inactive(file): 4665652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520256 kB' 'Mapped: 179432 kB' 'Shmem: 6405904 kB' 'KReclaimable: 602880 kB' 'Slab: 1466268 kB' 'SReclaimable: 602880 kB' 'SUnreclaim: 863388 kB' 'KernelStack: 27392 kB' 'PageTables: 8488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508432 kB' 'Committed_AS: 8497404 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 239020 kB' 'VmallocChunk: 0 kB' 'Percpu: 209664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3718516 kB' 'DirectMap2M: 22175744 kB' 'DirectMap1G: 110100480 kB' 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.453 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.454 22:51:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.454 22:51:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.454 22:51:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.454 22:51:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.454 22:51:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.454 22:51:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:19.454 nr_hugepages=1025 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:19.454 resv_hugepages=0 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:19.454 surplus_hugepages=0 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:19.454 anon_hugepages=0 00:03:19.454 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107906984 kB' 'MemAvailable: 112531620 kB' 'Buffers: 2704 kB' 'Cached: 11465920 kB' 'SwapCached: 0 kB' 'Active: 7319380 kB' 'Inactive: 4665652 kB' 'Active(anon): 6922332 kB' 'Inactive(anon): 0 kB' 'Active(file): 397048 kB' 'Inactive(file): 4665652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519752 kB' 'Mapped: 179432 kB' 'Shmem: 6405924 kB' 'KReclaimable: 602880 kB' 'Slab: 1466268 kB' 'SReclaimable: 602880 kB' 'SUnreclaim: 863388 kB' 'KernelStack: 27456 kB' 'PageTables: 8592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508432 kB' 'Committed_AS: 8497424 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 239036 kB' 'VmallocChunk: 0 kB' 'Percpu: 209664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3718516 kB' 'DirectMap2M: 22175744 kB' 'DirectMap1G: 110100480 kB' 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.455 
22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.455 22:51:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.455 22:51:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.455 22:51:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.455 22:51:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.455 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.456 22:51:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.456 
22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.456 
22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical compare/continue/read iterations for ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree and Unaccepted ...]
00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:19.456 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:03:19.457 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:19.457 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:19.457 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:19.457 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:19.457 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:19.457 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:19.457 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:19.457 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:19.457 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:19.457 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:19.457 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:03:19.457 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:19.457 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:19.457 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.457 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:19.457 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:19.457 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.457 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.457 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:19.457 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:19.457 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59406888 kB' 'MemUsed: 6252120 kB' 'SwapCached: 0 kB' 'Active: 1944816 kB' 'Inactive: 1037188 kB' 'Active(anon): 1726208 kB' 'Inactive(anon): 0 kB' 'Active(file): 218608 kB' 'Inactive(file): 1037188 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2868636 kB' 'Mapped: 94484 kB' 'AnonPages: 116556 kB' 'Shmem: 1612840 kB' 'KernelStack: 13768 kB' 'PageTables: 3016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 200028 kB' 'Slab: 620236 kB' 'SReclaimable: 200028 kB' 'SUnreclaim: 420208 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:19.457 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.457 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:19.457 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:19.457 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical compare/continue/read iterations for each remaining node0 meminfo field through HugePages_Free ...]
00:03:19.458 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.458 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:19.458 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:19.458 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:19.458 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:19.458 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:19.458 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:19.458 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:19.458 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:03:19.458 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:19.458 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:19.458 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.458 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:19.458 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:19.458 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.458 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.458 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:19.458 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:19.458 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679852 kB' 'MemFree: 48500228 kB' 'MemUsed: 12179624 kB' 'SwapCached: 0 kB' 'Active: 5374672 kB' 'Inactive: 3628464 kB' 'Active(anon): 5196232 kB' 'Inactive(anon): 0 kB' 'Active(file): 178440 kB' 'Inactive(file): 3628464 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8600012 kB' 'Mapped: 84948 kB' 'AnonPages: 403284 kB' 'Shmem: 4793108 kB' 'KernelStack: 13640 kB' 'PageTables: 5548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 402852 kB' 'Slab: 846032 kB' 'SReclaimable: 402852 kB' 'SUnreclaim: 443180 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:03:19.458 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.458 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:19.458 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:19.458 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical compare/continue/read iterations for each remaining node1 meminfo field through HugePages_Free ...]
00:03:19.460 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.460 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:19.460 22:51:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:19.460 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:19.460 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:19.460 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:19.460 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:19.460 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:19.460 node0=512 expecting 513
00:03:19.460 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:19.460 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:19.460 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:19.460 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:19.460 node1=513 expecting 512
00:03:19.460 22:51:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:19.460
00:03:19.460 real 0m4.100s
00:03:19.460 user 0m1.652s
00:03:19.460 sys 0m2.518s
00:03:19.460 22:51:36 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:19.460 22:51:36 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:19.460 ************************************
00:03:19.460 END TEST odd_alloc
00:03:19.460 ************************************
00:03:19.460 22:51:36 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:19.460 22:51:36 setup.sh.hugepages --
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:19.460 22:51:36 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:19.460 22:51:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:19.460 ************************************ 00:03:19.460 START TEST custom_alloc 00:03:19.460 ************************************ 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:19.460 22:51:36 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # 
HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:19.460 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:19.461 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:19.461 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:19.461 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:19.461 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:19.461 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:19.461 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:19.461 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:19.461 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:19.461 22:51:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:19.461 22:51:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:19.461 22:51:36 
setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:23.682 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:23.682 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:23.682 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:23.682 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:23.682 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:23.682 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:23.682 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:23.682 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:23.682 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:23.682 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:23.682 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:23.682 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:23.682 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:23.682 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:23.682 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:23.682 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:23.682 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:23.682 22:51:40 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 106870248 kB' 'MemAvailable: 111494852 kB' 'Buffers: 2704 kB' 'Cached: 11466052 kB' 'SwapCached: 0 kB' 'Active: 7322652 kB' 'Inactive: 4665652 kB' 'Active(anon): 6925604 kB' 'Inactive(anon): 0 kB' 'Active(file): 397048 kB' 'Inactive(file): 4665652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 
kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522416 kB' 'Mapped: 179564 kB' 'Shmem: 6406056 kB' 'KReclaimable: 602848 kB' 'Slab: 1466156 kB' 'SReclaimable: 602848 kB' 'SUnreclaim: 863308 kB' 'KernelStack: 27264 kB' 'PageTables: 8292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985168 kB' 'Committed_AS: 8495208 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238940 kB' 'VmallocChunk: 0 kB' 'Percpu: 209664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3718516 kB' 'DirectMap2M: 22175744 kB' 'DirectMap1G: 110100480 kB' 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.682 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.683 22:51:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.683 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.684 
22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 
-- # local var val 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 106870060 kB' 'MemAvailable: 111494664 kB' 'Buffers: 2704 kB' 'Cached: 11466056 kB' 'SwapCached: 0 kB' 'Active: 7322412 kB' 'Inactive: 4665652 kB' 'Active(anon): 6925364 kB' 'Inactive(anon): 0 kB' 'Active(file): 397048 kB' 'Inactive(file): 4665652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522160 kB' 'Mapped: 179508 kB' 'Shmem: 6406060 kB' 'KReclaimable: 602848 kB' 'Slab: 1466132 kB' 'SReclaimable: 602848 kB' 'SUnreclaim: 863284 kB' 'KernelStack: 27216 kB' 'PageTables: 8116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985168 kB' 'Committed_AS: 8495228 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238860 kB' 'VmallocChunk: 0 kB' 'Percpu: 209664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3718516 kB' 'DirectMap2M: 22175744 kB' 'DirectMap1G: 110100480 kB' 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.684 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.685 22:51:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.685 22:51:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.685 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.685 22:51:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.686 22:51:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.686 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.687 22:51:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@19 -- # local var val 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 106871484 kB' 'MemAvailable: 111496088 kB' 'Buffers: 2704 kB' 'Cached: 11466072 kB' 'SwapCached: 0 kB' 'Active: 7321968 kB' 'Inactive: 4665652 kB' 'Active(anon): 6924920 kB' 'Inactive(anon): 0 kB' 'Active(file): 397048 kB' 'Inactive(file): 4665652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522192 kB' 'Mapped: 179428 kB' 'Shmem: 6406076 kB' 'KReclaimable: 602848 kB' 'Slab: 1466124 kB' 'SReclaimable: 602848 kB' 'SUnreclaim: 863276 kB' 'KernelStack: 27248 kB' 'PageTables: 8332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985168 kB' 'Committed_AS: 8495616 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238876 kB' 'VmallocChunk: 0 kB' 'Percpu: 209664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3718516 kB' 'DirectMap2M: 22175744 kB' 'DirectMap1G: 110100480 kB' 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.687 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.688 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.688 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.688 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.688 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.688 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.688 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.688 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.688 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.688 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.688 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.688 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.688 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.688 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.688 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.688 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.688 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.688 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.688 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.688 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.688 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.688 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.688 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.688 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.688 22:51:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.688 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.688 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.688 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical xtrace iterations elided: the same "[[ <field> == HugePages_Rsvd ]] / continue / IFS=': ' / read -r var val _" cycle repeats for each remaining /proc/meminfo field from SwapFree through HugePages_Free, none of which match ...]
00:03:23.689 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.689 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.689 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:23.689 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:23.689 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:23.689 nr_hugepages=1536 00:03:23.689 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:23.689 resv_hugepages=0 00:03:23.689 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:23.689 surplus_hugepages=0 00:03:23.689 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:23.689 anon_hugepages=0 00:03:23.689 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:23.689 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages
)) 00:03:23.689 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:23.689 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:23.689 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:23.689 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:23.689 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.690 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.690 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.690 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.690 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.690 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.690 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.690 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.690 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 106871868 kB' 'MemAvailable: 111496472 kB' 'Buffers: 2704 kB' 'Cached: 11466108 kB' 'SwapCached: 0 kB' 'Active: 7322092 kB' 'Inactive: 4665652 kB' 'Active(anon): 6925044 kB' 'Inactive(anon): 0 kB' 'Active(file): 397048 kB' 'Inactive(file): 4665652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522244 kB' 'Mapped: 179428 kB' 'Shmem: 6406112 kB' 'KReclaimable: 602848 kB' 'Slab: 1466124 kB' 'SReclaimable: 602848 kB' 'SUnreclaim: 863276 kB' 'KernelStack: 27248 kB' 'PageTables: 8288 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985168 kB' 'Committed_AS: 8495636 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238876 kB' 'VmallocChunk: 0 kB' 'Percpu: 209664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3718516 kB' 'DirectMap2M: 22175744 kB' 'DirectMap1G: 110100480 kB' 00:03:23.690
[... identical xtrace iterations elided: the same "[[ <field> == HugePages_Total ]] / continue / IFS=': ' / read -r var val _" cycle repeats for each /proc/meminfo field from MemTotal through Unaccepted, none of which match ...]
22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.691 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:23.691 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:23.691 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:23.691 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:23.691 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:23.691 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.691 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:23.691 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.691 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:23.691 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:23.691 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:23.691 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:23.691 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:23.691 22:51:40
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:23.691 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.691 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:23.691 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:23.691 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.691 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59399332 kB' 'MemUsed: 6259676 kB' 'SwapCached: 0 kB' 'Active: 1948340 kB' 'Inactive: 1037188 kB' 'Active(anon): 1729732 kB' 'Inactive(anon): 0 kB' 'Active(file): 218608 kB' 'Inactive(file): 1037188 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2868824 kB' 'Mapped: 94504 kB' 'AnonPages: 119896 kB' 'Shmem: 1613028 kB' 'KernelStack: 13752 kB' 'PageTables: 3016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 200028 kB' 'Slab: 620244 kB' 'SReclaimable: 200028 kB' 'SUnreclaim: 420216 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.692 22:51:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.692 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.693 22:51:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.693 22:51:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:23.693 22:51:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679852 kB' 'MemFree: 47471700 kB' 'MemUsed: 13208152 kB' 'SwapCached: 0 kB' 'Active: 5373788 kB' 'Inactive: 3628464 kB' 'Active(anon): 5195348 kB' 'Inactive(anon): 0 kB' 'Active(file): 178440 kB' 'Inactive(file): 3628464 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8600012 kB' 'Mapped: 84924 kB' 'AnonPages: 402372 kB' 'Shmem: 4793108 kB' 'KernelStack: 13512 kB' 'PageTables: 5320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 402820 kB' 'Slab: 845880 kB' 'SReclaimable: 402820 kB' 'SUnreclaim: 443060 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.693 22:51:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.693 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.694 22:51:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.694 22:51:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.694 22:51:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.694 22:51:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:23.695 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:23.695 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:23.695 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:23.695 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:23.695 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:23.695 node0=512 expecting 512 00:03:23.695 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:23.695 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:23.695 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:23.695 22:51:40 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:23.695 node1=1024 expecting 1024 00:03:23.695 22:51:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:23.695 00:03:23.695 real 0m4.045s 00:03:23.695 user 0m1.645s 00:03:23.695 sys 0m2.471s 00:03:23.695 22:51:40 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:23.695 22:51:40 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:23.695 ************************************ 00:03:23.695 END TEST custom_alloc 00:03:23.695 ************************************ 00:03:23.695 22:51:41 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:23.695 22:51:41 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:23.695 22:51:41 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:23.695 22:51:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:23.695 ************************************ 00:03:23.695 START TEST no_shrink_alloc 00:03:23.695 ************************************ 00:03:23.695 22:51:41 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:03:23.695 22:51:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:23.695 22:51:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:23.695 22:51:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:23.695 22:51:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:23.695 22:51:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:23.695 22:51:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:23.695 22:51:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 
-- # (( size >= default_hugepages )) 00:03:23.695 22:51:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:23.695 22:51:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:23.695 22:51:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:23.695 22:51:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:23.695 22:51:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:23.695 22:51:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:23.695 22:51:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:23.695 22:51:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:23.695 22:51:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:23.695 22:51:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:23.695 22:51:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:23.695 22:51:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:23.695 22:51:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:23.695 22:51:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:23.695 22:51:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:26.995 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:26.995 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:26.995 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:26.995 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:26.995 
0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:26.995 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:26.995 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:26.995 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:26.995 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:26.995 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:26.995 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:26.995 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:26.995 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:26.995 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:26.995 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:26.995 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:26.995 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:27.260 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:27.260 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:27.260 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:27.260 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:27.260 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:27.260 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:27.260 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:27.260 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:27.260 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:27.260 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 
00:03:27.260 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:27.260 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:27.260 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.260 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.260 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.260 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.260 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.260 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.260 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.260 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.260 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107921040 kB' 'MemAvailable: 112545644 kB' 'Buffers: 2704 kB' 'Cached: 11466244 kB' 'SwapCached: 0 kB' 'Active: 7322528 kB' 'Inactive: 4665652 kB' 'Active(anon): 6925480 kB' 'Inactive(anon): 0 kB' 'Active(file): 397048 kB' 'Inactive(file): 4665652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522476 kB' 'Mapped: 179544 kB' 'Shmem: 6406248 kB' 'KReclaimable: 602848 kB' 'Slab: 1465728 kB' 'SReclaimable: 602848 kB' 'SUnreclaim: 862880 kB' 'KernelStack: 27472 kB' 'PageTables: 9032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8499396 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238988 kB' 'VmallocChunk: 0 kB' 'Percpu: 
209664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3718516 kB' 'DirectMap2M: 22175744 kB' 'DirectMap1G: 110100480 kB' 00:03:27.260 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.260 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.260 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.260 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.260 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.260 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.260 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.260 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.260 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.261 
22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.261 
22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.261 22:51:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.261 22:51:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.261 
22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.261 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.262 22:51:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:27.262 22:51:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107921952 kB' 'MemAvailable: 112546556 kB' 'Buffers: 2704 kB' 'Cached: 11466244 kB' 'SwapCached: 0 kB' 'Active: 7322056 kB' 'Inactive: 4665652 kB' 'Active(anon): 6925008 kB' 'Inactive(anon): 0 kB' 'Active(file): 397048 kB' 'Inactive(file): 4665652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522024 kB' 'Mapped: 179544 kB' 'Shmem: 6406248 kB' 'KReclaimable: 602848 kB' 'Slab: 1465692 kB' 'SReclaimable: 602848 kB' 'SUnreclaim: 862844 kB' 'KernelStack: 27264 kB' 'PageTables: 8216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8499620 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238908 kB' 'VmallocChunk: 0 kB' 'Percpu: 209664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3718516 kB' 'DirectMap2M: 22175744 kB' 'DirectMap1G: 110100480 kB'
00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.262 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[trace elided: the IFS=': ' / read -r var val _ / [[ <field> == HugePages_Surp ]] / continue sequence repeats identically for every remaining meminfo field, MemFree through HugePages_Rsvd, until the match below]
00:03:27.264 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.264 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:27.264 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:27.264 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:27.264 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:27.264 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:27.264 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:27.264 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:27.264 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:27.264 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.264 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:27.264 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:27.264 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.264 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.264 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.264 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.264 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107919556 kB' 'MemAvailable: 112544160 kB' 'Buffers: 2704 kB' 'Cached: 11466264 kB' 'SwapCached: 0 kB' 'Active: 7323708 kB' 'Inactive: 4665652 kB' 'Active(anon): 6926660 kB' 'Inactive(anon): 0 kB' 'Active(file): 397048 kB' 'Inactive(file): 4665652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523608 kB' 'Mapped: 179988 kB' 'Shmem: 6406268 kB' 'KReclaimable: 602848 kB' 'Slab: 1465728 kB' 'SReclaimable: 602848 kB' 'SUnreclaim: 862880 kB' 'KernelStack: 27280 kB' 'PageTables: 8176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8500456 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238908 kB' 'VmallocChunk: 0 kB' 'Percpu: 209664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3718516 kB' 'DirectMap2M: 22175744 kB' 'DirectMap1G: 110100480 kB'
00:03:27.264 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:27.264 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[trace elided: the same per-field scan repeats, now matching each field against HugePages_Rsvd]
00:03:27.265 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:27.265 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:27.265 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.265
22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.265 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.265 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.265 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.265 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.265 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.265 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.265 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.265 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.265 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.265 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.265 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.265 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:27.266 nr_hugepages=1024 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:27.266 resv_hugepages=0 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:27.266 surplus_hugepages=0 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:27.266 anon_hugepages=0 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107916732 kB' 'MemAvailable: 112541336 kB' 'Buffers: 2704 kB' 'Cached: 11466284 kB' 'SwapCached: 0 kB' 'Active: 7328024 kB' 'Inactive: 4665652 kB' 'Active(anon): 6930976 kB' 'Inactive(anon): 0 kB' 'Active(file): 397048 kB' 'Inactive(file): 4665652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528020 kB' 'Mapped: 180268 kB' 'Shmem: 6406288 kB' 'KReclaimable: 602848 kB' 'Slab: 1465728 kB' 'SReclaimable: 602848 kB' 'SUnreclaim: 862880 kB' 'KernelStack: 27328 kB' 'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8504052 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238928 kB' 'VmallocChunk: 0 kB' 'Percpu: 209664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3718516 kB' 'DirectMap2M: 22175744 kB' 'DirectMap1G: 110100480 kB' 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.266 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.266 [identical "IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue" trace repeated for the remaining meminfo keys: MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree] 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l
]] 00:03:27.268 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.268 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.268 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.268 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.268 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:27.268 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:27.268 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:27.268 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:27.268 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:27.268 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.268 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:27.268 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.268 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:27.268 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:27.268 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:27.268 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:27.268 22:51:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo 
HugePages_Surp 0 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58378296 kB' 'MemUsed: 7280712 kB' 'SwapCached: 0 kB' 'Active: 1948580 kB' 'Inactive: 1037188 kB' 'Active(anon): 1729972 kB' 'Inactive(anon): 0 kB' 'Active(file): 218608 kB' 'Inactive(file): 1037188 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2868928 kB' 'Mapped: 94700 kB' 'AnonPages: 120072 kB' 'Shmem: 1613132 kB' 'KernelStack: 13912 kB' 'PageTables: 3176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 200028 kB' 'Slab: 620408 kB' 'SReclaimable: 200028 kB' 'SUnreclaim: 420380 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.268 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 
00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:27.269 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:27.270 node0=1024 expecting 1024 00:03:27.270 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:27.270 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:27.270 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:27.270 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:27.270 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.270 22:51:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:31.481 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:31.481 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:31.481 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:31.481 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:31.481 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:31.481 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:31.481 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:31.481 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:31.481 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:31.481 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:31.481 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:31.481 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:31.481 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:31.481 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:31.481 0000:00:01.3 (8086 
0b00): Already using the vfio-pci driver 00:03:31.481 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:31.481 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:31.481 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:31.481 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:31.481 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:31.481 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:31.481 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:31.481 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:31.481 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:31.481 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:31.481 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:31.481 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:31.481 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:31.481 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:31.481 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.482 22:51:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107942060 kB' 'MemAvailable: 112566664 kB' 'Buffers: 2704 kB' 'Cached: 11466400 kB' 'SwapCached: 0 kB' 'Active: 7323872 kB' 'Inactive: 4665652 kB' 'Active(anon): 6926824 kB' 'Inactive(anon): 0 kB' 'Active(file): 397048 kB' 'Inactive(file): 4665652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523148 kB' 'Mapped: 179664 kB' 'Shmem: 6406404 kB' 'KReclaimable: 602848 kB' 'Slab: 1465272 kB' 'SReclaimable: 602848 kB' 'SUnreclaim: 862424 kB' 'KernelStack: 27280 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8497636 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238940 kB' 'VmallocChunk: 0 kB' 'Percpu: 209664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3718516 kB' 'DirectMap2M: 22175744 kB' 'DirectMap1G: 110100480 kB' 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.482 
22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.482 
22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.482 22:51:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.482 22:51:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.482 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.483 22:51:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.483 22:51:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.483 22:51:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.483 22:51:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.483 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.484 
22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.484 22:51:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107943124 kB' 'MemAvailable: 112567728 kB' 'Buffers: 2704 kB' 'Cached: 11466400 kB' 'SwapCached: 0 kB' 'Active: 7322644 kB' 'Inactive: 4665652 kB' 'Active(anon): 6925596 kB' 'Inactive(anon): 0 kB' 'Active(file): 397048 kB' 'Inactive(file): 4665652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522332 kB' 'Mapped: 179472 kB' 'Shmem: 6406404 kB' 'KReclaimable: 602848 kB' 'Slab: 1465264 kB' 'SReclaimable: 602848 kB' 'SUnreclaim: 862416 kB' 'KernelStack: 27232 kB' 'PageTables: 8260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8497652 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238908 kB' 'VmallocChunk: 0 kB' 'Percpu: 209664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3718516 kB' 'DirectMap2M: 22175744 kB' 'DirectMap1G: 110100480 kB' 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.484 
22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.484 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.485 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.486 22:51:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.486 22:51:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:31.486 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 
00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107943484 kB' 'MemAvailable: 112568088 kB' 'Buffers: 2704 kB' 'Cached: 11466424 kB' 'SwapCached: 0 kB' 'Active: 7322820 kB' 'Inactive: 4665652 kB' 'Active(anon): 6925772 kB' 'Inactive(anon): 0 kB' 'Active(file): 397048 kB' 'Inactive(file): 4665652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522548 kB' 'Mapped: 179472 kB' 'Shmem: 6406428 kB' 'KReclaimable: 602848 kB' 'Slab: 1465264 kB' 'SReclaimable: 602848 kB' 'SUnreclaim: 862416 kB' 'KernelStack: 27264 kB' 'PageTables: 8352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8497676 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238908 kB' 'VmallocChunk: 0 kB' 'Percpu: 209664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 
0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3718516 kB' 'DirectMap2M: 22175744 kB' 'DirectMap1G: 110100480 kB' 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.487 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.488 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.489 22:51:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.489 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:31.490 
nr_hugepages=1024 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:31.490 resv_hugepages=0 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:31.490 surplus_hugepages=0 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:31.490 anon_hugepages=0 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.490 22:51:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338860 kB' 'MemFree: 107943736 kB' 'MemAvailable: 112568340 kB' 'Buffers: 2704 kB' 'Cached: 11466444 kB' 'SwapCached: 0 kB' 'Active: 7323060 kB' 'Inactive: 4665652 kB' 'Active(anon): 6926012 kB' 'Inactive(anon): 0 kB' 'Active(file): 397048 kB' 'Inactive(file): 4665652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522816 kB' 'Mapped: 179472 kB' 'Shmem: 6406448 kB' 'KReclaimable: 602848 kB' 'Slab: 1465264 kB' 'SReclaimable: 602848 kB' 'SUnreclaim: 862416 kB' 'KernelStack: 27296 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509456 kB' 'Committed_AS: 8497696 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238940 kB' 'VmallocChunk: 0 kB' 'Percpu: 209664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3718516 kB' 'DirectMap2M: 22175744 kB' 'DirectMap1G: 110100480 kB' 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.490 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.491 22:51:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.491 22:51:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.491 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.492 22:51:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 
1024 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:31.492 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
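The long scan above is `setup/common.sh`'s `get_meminfo` walking `/proc/meminfo` one `Key: value` line at a time with `IFS=': ' read -r var val _`, skipping every key until it hits the one requested (here `HugePages_Total`, which returns 1024). A minimal standalone sketch of that parsing technique (the function name mirrors the helper in the trace, but this is a simplified reconstruction, not SPDK's actual implementation):

```shell
#!/usr/bin/env bash
# Sketch of the meminfo scan seen in the trace: split each line of
# /proc/meminfo on ": " into key/value, print the value for the
# requested key and stop. Mirrors the IFS=': ' read -r var val _ loop.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1   # key not present
}

# Example: query a key that is always present.
get_meminfo MemTotal
```

The trace looks noisy because xtrace prints one `continue` per non-matching key, but the underlying loop is just this few lines.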
00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58394868 kB' 'MemUsed: 7264140 kB' 'SwapCached: 0 kB' 'Active: 1946748 kB' 'Inactive: 1037188 kB' 'Active(anon): 1728140 kB' 'Inactive(anon): 0 kB' 'Active(file): 218608 kB' 'Inactive(file): 1037188 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2868948 kB' 'Mapped: 94548 kB' 'AnonPages: 118140 kB' 'Shmem: 1613152 kB' 'KernelStack: 13720 kB' 'PageTables: 2920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 200028 kB' 'Slab: 620004 kB' 'SReclaimable: 200028 kB' 'SUnreclaim: 419976 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.493 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:31.494 22:51:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.494 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.495 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.495 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.495 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.495 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.495 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.495 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.495 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.495 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:31.495 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:31.495 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:31.495 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:31.495 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:31.495 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:31.495 node0=1024 expecting 1024 00:03:31.495 22:51:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:31.495 00:03:31.495 real 0m7.950s 00:03:31.495 user 0m3.074s 00:03:31.495 sys 0m5.008s 00:03:31.495 22:51:49 
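The second scan above is the per-NUMA-node variant: when `get_meminfo` is given a node number, it reads `/sys/devices/system/node/node<N>/meminfo` instead, where each line carries a `Node <N> ` prefix that must be stripped before the same key/value parsing (the `mem=("${mem[@]#Node +([0-9]) }")` step in the trace). A hedged sketch of that lookup, using `sed`/`awk` in place of the bash array trick; `node_meminfo` is a hypothetical name for illustration:

```shell
#!/usr/bin/env bash
# Per-node meminfo lookup, assuming the sysfs NUMA layout seen in the
# trace: lines are prefixed "Node <N> ", which we strip before matching.
node_meminfo() {
    local node=$1 key=$2
    local f=/sys/devices/system/node/node${node}/meminfo
    [[ -r $f ]] || f=/proc/meminfo   # fall back on non-NUMA systems
    sed 's/^Node [0-9]* *//' "$f" |
        awk -v k="$key:" '$1 == k { print $2; exit }'
}

# Example: per-node free hugepage count (0 if unset on this machine).
node_meminfo 0 HugePages_Free
```

This matches what the trace verifies with `node0=1024 expecting 1024`: the node-0 pool reports `HugePages_Total: 1024` and `HugePages_Surp: 0`, so the allocation did not shrink.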
setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:31.495 22:51:49 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:31.495 ************************************ 00:03:31.495 END TEST no_shrink_alloc 00:03:31.495 ************************************ 00:03:31.495 22:51:49 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:31.495 22:51:49 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:31.495 22:51:49 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:31.495 22:51:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:31.495 22:51:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:31.495 22:51:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:31.495 22:51:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:31.495 22:51:49 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:31.495 22:51:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:31.495 22:51:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:31.495 22:51:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:31.495 22:51:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:31.495 22:51:49 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:31.495 22:51:49 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:31.495 00:03:31.495 real 0m29.020s 00:03:31.495 user 0m11.511s 00:03:31.495 sys 0m17.926s 00:03:31.495 22:51:49 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:31.495 22:51:49 setup.sh.hugepages -- 
common/autotest_common.sh@10 -- # set +x 00:03:31.495 ************************************ 00:03:31.495 END TEST hugepages 00:03:31.495 ************************************ 00:03:31.495 22:51:49 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:31.495 22:51:49 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:31.495 22:51:49 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:31.495 22:51:49 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:31.495 ************************************ 00:03:31.495 START TEST driver 00:03:31.495 ************************************ 00:03:31.495 22:51:49 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:31.495 * Looking for test storage... 00:03:31.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:31.495 22:51:49 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:31.495 22:51:49 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:31.495 22:51:49 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:36.784 22:51:54 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:36.784 22:51:54 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:36.784 22:51:54 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:36.784 22:51:54 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:36.784 ************************************ 00:03:36.784 START TEST guess_driver 00:03:36.784 ************************************ 00:03:36.784 22:51:54 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:03:36.784 22:51:54 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver 
marker 00:03:36.784 22:51:54 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:36.784 22:51:54 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:36.784 22:51:54 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:36.784 22:51:54 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:36.784 22:51:54 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:36.784 22:51:54 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:36.784 22:51:54 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:36.784 22:51:54 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:36.784 22:51:54 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 370 > 0 )) 00:03:36.784 22:51:54 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:36.784 22:51:54 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:36.784 22:51:54 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:36.784 22:51:54 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:36.784 22:51:54 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:36.784 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:36.784 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:36.784 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:36.784 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:36.784 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:36.784 insmod 
/lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:36.784 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:36.784 22:51:54 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:36.784 22:51:54 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:36.784 22:51:54 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:36.784 22:51:54 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:36.784 22:51:54 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:36.784 Looking for driver=vfio-pci 00:03:36.784 22:51:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:36.784 22:51:54 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:36.784 22:51:54 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.784 22:51:54 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- 
setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.991 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.992 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.992 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.992 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.992 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.992 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.992 22:51:58 setup.sh.driver.guess_driver -- 
setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.992 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.992 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.992 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.992 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.992 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.992 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:40.992 22:51:58 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:40.992 22:51:58 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:40.992 22:51:58 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:46.285 00:03:46.285 real 0m9.059s 00:03:46.285 user 0m3.059s 00:03:46.285 sys 0m5.261s 00:03:46.285 22:52:03 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:46.285 22:52:03 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:46.285 ************************************ 00:03:46.286 END TEST guess_driver 00:03:46.286 ************************************ 00:03:46.286 00:03:46.286 real 0m14.278s 00:03:46.286 user 0m4.656s 00:03:46.286 sys 0m8.128s 00:03:46.286 22:52:03 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:46.286 22:52:03 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:46.286 ************************************ 00:03:46.286 END TEST driver 00:03:46.286 ************************************ 00:03:46.286 22:52:03 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:46.286 
22:52:03 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:46.286 22:52:03 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:46.286 22:52:03 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:46.286 ************************************ 00:03:46.286 START TEST devices 00:03:46.286 ************************************ 00:03:46.286 22:52:03 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:46.286 * Looking for test storage... 00:03:46.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:46.286 22:52:03 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:46.286 22:52:03 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:46.286 22:52:03 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:46.286 22:52:03 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:50.539 22:52:07 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:50.539 22:52:07 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:50.539 22:52:07 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:50.539 22:52:07 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:50.539 22:52:07 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:50.539 22:52:07 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:50.539 22:52:07 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:50.539 22:52:07 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:50.539 22:52:07 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:50.539 22:52:07 setup.sh.devices -- setup/devices.sh@196 
-- # blocks=() 00:03:50.539 22:52:07 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:50.539 22:52:07 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:50.539 22:52:07 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:50.539 22:52:07 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:50.539 22:52:07 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:50.539 22:52:07 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:50.539 22:52:07 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:50.539 22:52:07 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:03:50.539 22:52:07 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:50.539 22:52:07 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:50.539 22:52:07 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:50.539 22:52:07 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:50.539 No valid GPT data, bailing 00:03:50.539 22:52:07 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:50.539 22:52:07 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:50.539 22:52:07 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:50.539 22:52:07 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:50.539 22:52:07 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:50.539 22:52:07 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:50.539 22:52:07 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:03:50.539 22:52:07 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:03:50.539 22:52:07 setup.sh.devices -- setup/devices.sh@205 -- # 
blocks+=("${block##*/}") 00:03:50.539 22:52:07 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:03:50.539 22:52:07 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:50.539 22:52:07 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:50.539 22:52:07 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:50.539 22:52:07 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:50.539 22:52:07 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:50.539 22:52:07 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:50.539 ************************************ 00:03:50.539 START TEST nvme_mount 00:03:50.539 ************************************ 00:03:50.539 22:52:07 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:03:50.539 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:50.539 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:50.539 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.539 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:50.539 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:50.539 22:52:07 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:50.539 22:52:07 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:50.539 22:52:07 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:50.539 22:52:07 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:50.539 22:52:07 
setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:50.539 22:52:07 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:50.539 22:52:07 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:50.539 22:52:07 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:50.539 22:52:07 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:50.539 22:52:07 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:50.539 22:52:07 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:50.539 22:52:07 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:50.539 22:52:07 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:50.539 22:52:07 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:51.111 Creating new GPT entries in memory. 00:03:51.111 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:51.111 other utilities. 00:03:51.111 22:52:08 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:51.111 22:52:08 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:51.111 22:52:08 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:51.111 22:52:08 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:51.111 22:52:08 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:52.496 Creating new GPT entries in memory. 00:03:52.496 The operation has completed successfully. 
00:03:52.496 22:52:09 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:52.496 22:52:09 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:52.496 22:52:09 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 599418 00:03:52.496 22:52:09 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:52.496 22:52:09 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:52.496 22:52:09 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:52.496 22:52:09 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:52.496 22:52:09 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:52.496 22:52:09 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:52.496 22:52:09 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:52.496 22:52:09 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:03:52.496 22:52:09 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:52.496 22:52:09 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:52.496 22:52:09 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
00:03:52.496 22:52:09 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:52.496 22:52:09 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:52.496 22:52:09 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:52.496 22:52:09 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:52.496 22:52:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.496 22:52:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:03:52.496 22:52:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:52.496 22:52:09 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.496 22:52:09 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:55.801 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.061 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:56.061 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:56.061 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.061 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:56.061 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:56.061 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 
00:03:56.061 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.061 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.061 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:56.062 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:56.062 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:56.062 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:56.062 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:56.322 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:56.322 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:03:56.322 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:56.322 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:56.322 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:56.322 22:52:13 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:56.322 22:52:13 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.322 22:52:13 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:56.322 22:52:13 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:56.322 22:52:13 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount 
/dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.322 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:56.322 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:03:56.322 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:56.322 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.322 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:56.322 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:56.322 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:56.322 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:56.322 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:56.322 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.322 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:03:56.322 22:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:56.322 22:52:13 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.322 22:52:13 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:00.527 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:00.527 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.527 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:00.527 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.527 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:00.527 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.527 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:00.527 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.527 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:00.527 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.527 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:00.527 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.527 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ 
\d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:00.528 22:52:17 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.528 22:52:17 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:03.827 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:03.827 00:04:03.827 real 0m13.775s 00:04:03.827 user 0m4.369s 00:04:03.827 sys 0m7.289s 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:03.827 22:52:21 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:03.827 ************************************ 00:04:03.827 END TEST nvme_mount 00:04:03.828 ************************************ 00:04:04.088 22:52:21 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:04.088 22:52:21 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 
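The `found=1` / `(( found == 1 ))` steps in the nvme_mount trace above hinge on a bash glob match: `setup.sh config` emits a status line per PCI device, and `verify()` sets `found` when the "Active devices" line contains the expected `dev:mount` pairing. A minimal standalone sketch of that matching logic, assuming the status-line format inferred from the xtrace output (the sample string below is illustrative, not captured from this run):

```shell
#!/usr/bin/env bash
# Sketch of the "Active devices" check seen in setup/devices.sh verify().
# Returns success when the status line reports the expected mount pairing,
# mirroring the [[ $status == *"Active devices: "*$mounts* ]] test above.
device_in_use() {
    local status=$1 mounts=$2
    [[ $status == *"Active devices: "*"$mounts"* ]]
}

# Hypothetical status line in the shape the trace shows for 0000:65:00.0.
status="Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev"

found=0
if device_in_use "$status" "nvme0n1:nvme0n1"; then
    found=1
fi
echo "found=$found"
```

Non-matching lines (plain PCI addresses like `0000:80:01.6`) fall through the same test without setting `found`, which is why the trace shows one `read -r pci _ _ status` per device.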
00:04:04.088 22:52:21 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:04.088 22:52:21 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:04.088 ************************************ 00:04:04.088 START TEST dm_mount 00:04:04.088 ************************************ 00:04:04.088 22:52:21 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:04:04.088 22:52:21 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:04.088 22:52:21 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:04.088 22:52:21 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:04.088 22:52:21 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:04.088 22:52:21 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:04.088 22:52:21 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:04.088 22:52:21 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:04.088 22:52:21 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:04.088 22:52:21 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:04.088 22:52:21 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:04.088 22:52:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:04.088 22:52:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:04.088 22:52:21 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:04.088 22:52:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:04.089 22:52:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:04.089 22:52:21 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:04.089 22:52:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 
00:04:04.089 22:52:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:04.089 22:52:21 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:04.089 22:52:21 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:04.089 22:52:21 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:05.030 Creating new GPT entries in memory. 00:04:05.030 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:05.030 other utilities. 00:04:05.030 22:52:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:05.030 22:52:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:05.030 22:52:22 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:05.030 22:52:22 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:05.030 22:52:22 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:05.972 Creating new GPT entries in memory. 00:04:05.972 The operation has completed successfully. 00:04:05.972 22:52:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:05.972 22:52:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:05.972 22:52:23 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:05.972 22:52:23 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:05.972 22:52:23 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:07.355 The operation has completed successfully. 
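The partition loop traced above derives each `sgdisk --new` range from a fixed 1 GiB size: `(( size /= 512 ))` converts bytes to 2097152 sectors, the first partition starts at LBA 2048, and each later one starts at the previous end + 1 — which is why the log shows `--new=1:2048:2099199` followed by `--new=2:2099200:4196351`. A sketch of just that arithmetic, printing the sgdisk invocations instead of running them against a real disk:

```shell
#!/usr/bin/env bash
# Recompute the sgdisk ranges used by the partition loop above.
# Prints the commands only; never touches a block device.

disk=nvme0n1
part_no=2
size=$((1073741824 / 512))   # 1 GiB in 512-byte sectors: 2097152

part_start=0
part_end=0
for ((part = 1; part <= part_no; part++)); do
    # First partition starts at LBA 2048; later ones follow the previous end.
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    echo "sgdisk /dev/$disk --new=$part:$part_start:$part_end"
done
```

In the real script each `sgdisk` call is serialized with `flock /dev/nvme0n1` so the kernel's partition-table rescan (the "GPT data structures destroyed!" / "The operation has completed successfully." messages above) cannot race with the next invocation.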
00:04:07.355 22:52:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:07.355 22:52:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:07.355 22:52:24 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 604976 00:04:07.355 22:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:07.355 22:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:07.355 22:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:07.355 22:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:07.355 22:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:07.355 22:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:07.355 22:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:07.355 22:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:07.355 22:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:07.355 22:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:07.356 22:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:07.356 22:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:07.356 22:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:07.356 22:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:07.356 22:52:24 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # 
local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:07.356 22:52:24 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:07.356 22:52:24 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:07.356 22:52:24 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:07.356 22:52:24 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:07.356 22:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:07.356 22:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:07.356 22:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:07.356 22:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:07.356 22:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:07.356 22:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:07.356 22:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:07.356 22:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:07.356 22:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:07.356 22:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:04:07.356 22:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:07.356 22:52:24 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:07.356 22:52:24 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.356 22:52:24 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:10.659 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.659 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.659 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.659 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.659 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.659 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.659 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.659 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.659 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.659 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.659 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.659 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.659 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.659 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:04:10.659 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.659 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.920 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.920 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:10.920 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:10.920 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.920 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.920 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.920 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.920 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.920 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.920 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.920 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.920 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.920 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.920 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.920 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.920 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.920 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.920 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.920 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.920 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.920 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:10.920 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:10.920 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:10.920 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:10.920 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:10.920 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:10.920 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:10.920 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:10.920 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:10.921 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:10.921 22:52:28 setup.sh.devices.dm_mount -- 
setup/devices.sh@51 -- # local test_file= 00:04:10.921 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:10.921 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:10.921 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:10.921 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.921 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:10.921 22:52:28 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:10.921 22:52:28 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.921 22:52:28 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.149 22:52:32 
setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 
== \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:15.149 /dev/nvme0n1p1: 2 bytes were 
erased at offset 0x00000438 (ext4): 53 ef 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:15.149 00:04:15.149 real 0m10.804s 00:04:15.149 user 0m2.938s 00:04:15.149 sys 0m4.931s 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.149 22:52:32 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:15.149 ************************************ 00:04:15.149 END TEST dm_mount 00:04:15.149 ************************************ 00:04:15.149 22:52:32 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:15.149 22:52:32 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:15.149 22:52:32 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.149 22:52:32 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:15.149 22:52:32 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:15.149 22:52:32 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:15.149 22:52:32 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:15.149 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:15.149 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:15.149 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:15.149 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:15.149 22:52:32 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:15.149 22:52:32 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:15.149 22:52:32 setup.sh.devices -- setup/devices.sh@36 -- # [[ 
-L /dev/mapper/nvme_dm_test ]] 00:04:15.149 22:52:32 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:15.149 22:52:32 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:15.149 22:52:32 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:15.149 22:52:32 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:15.149 00:04:15.149 real 0m29.318s 00:04:15.149 user 0m8.963s 00:04:15.149 sys 0m15.181s 00:04:15.149 22:52:32 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.149 22:52:32 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:15.149 ************************************ 00:04:15.149 END TEST devices 00:04:15.149 ************************************ 00:04:15.149 00:04:15.149 real 1m39.505s 00:04:15.149 user 0m33.776s 00:04:15.149 sys 0m57.153s 00:04:15.149 22:52:32 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.149 22:52:32 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:15.149 ************************************ 00:04:15.149 END TEST setup.sh 00:04:15.149 ************************************ 00:04:15.149 22:52:32 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:19.409 Hugepages 00:04:19.409 node hugesize free / total 00:04:19.409 node0 1048576kB 0 / 0 00:04:19.409 node0 2048kB 2048 / 2048 00:04:19.409 node1 1048576kB 0 / 0 00:04:19.409 node1 2048kB 0 / 0 00:04:19.409 00:04:19.409 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:19.409 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:19.409 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:19.409 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:19.409 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:19.409 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:19.409 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:19.409 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:19.409 
I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:19.410 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:19.410 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:19.410 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:19.410 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:19.410 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:19.410 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:19.410 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:19.410 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:19.410 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:19.410 22:52:36 -- spdk/autotest.sh@130 -- # uname -s 00:04:19.410 22:52:36 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:19.410 22:52:36 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:19.410 22:52:36 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:23.614 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:23.615 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:23.615 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:23.615 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:23.615 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:23.615 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:23.615 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:23.615 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:23.615 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:23.615 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:23.615 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:23.615 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:23.615 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:23.615 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:23.615 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:23.615 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:24.998 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:24.998 22:52:42 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:25.939 22:52:43 -- 
common/autotest_common.sh@1533 -- # bdfs=() 00:04:25.939 22:52:43 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:25.939 22:52:43 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:25.939 22:52:43 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:25.939 22:52:43 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:25.939 22:52:43 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:25.939 22:52:43 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:25.939 22:52:43 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:25.939 22:52:43 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:25.939 22:52:43 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:25.939 22:52:43 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:04:25.939 22:52:43 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:30.147 Waiting for block devices as requested 00:04:30.147 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:30.147 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:30.147 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:30.147 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:30.147 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:30.147 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:30.147 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:30.147 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:30.147 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:30.408 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:30.408 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:30.408 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:30.668 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:30.668 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:30.668 0000:00:01.3 (8086 0b00): vfio-pci -> 
ioatdma 00:04:30.668 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:30.927 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:30.927 22:52:48 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:30.927 22:52:48 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:30.927 22:52:48 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:30.927 22:52:48 -- common/autotest_common.sh@1502 -- # grep 0000:65:00.0/nvme/nvme 00:04:30.927 22:52:48 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:30.927 22:52:48 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:30.927 22:52:48 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:30.927 22:52:48 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:30.927 22:52:48 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:30.927 22:52:48 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:30.927 22:52:48 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:30.927 22:52:48 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:30.927 22:52:48 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:30.927 22:52:48 -- common/autotest_common.sh@1545 -- # oacs=' 0x5f' 00:04:30.927 22:52:48 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:30.927 22:52:48 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:30.927 22:52:48 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:30.927 22:52:48 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:30.927 22:52:48 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:30.927 22:52:48 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:30.927 22:52:48 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:30.927 22:52:48 -- 
common/autotest_common.sh@1557 -- # continue 00:04:30.927 22:52:48 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:30.927 22:52:48 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:30.927 22:52:48 -- common/autotest_common.sh@10 -- # set +x 00:04:30.927 22:52:48 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:30.927 22:52:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:30.927 22:52:48 -- common/autotest_common.sh@10 -- # set +x 00:04:30.927 22:52:48 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:35.128 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:35.128 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:35.128 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:35.128 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:35.128 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:35.128 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:35.128 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:35.128 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:35.128 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:35.128 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:35.128 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:35.128 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:35.128 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:35.128 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:35.128 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:35.128 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:35.128 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:35.128 22:52:52 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:35.128 22:52:52 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:35.128 22:52:52 -- common/autotest_common.sh@10 -- # set +x 00:04:35.128 22:52:52 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:35.128 22:52:52 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:35.128 22:52:52 -- 
common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:35.128 22:52:52 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:35.128 22:52:52 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:35.128 22:52:52 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:35.128 22:52:52 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:35.128 22:52:52 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:35.128 22:52:52 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:35.128 22:52:52 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:35.128 22:52:52 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:35.128 22:52:52 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:35.128 22:52:52 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:04:35.128 22:52:52 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:35.128 22:52:52 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:35.128 22:52:52 -- common/autotest_common.sh@1580 -- # device=0xa80a 00:04:35.128 22:52:52 -- common/autotest_common.sh@1581 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:35.128 22:52:52 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:35.128 22:52:52 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:35.128 22:52:52 -- common/autotest_common.sh@1593 -- # return 0 00:04:35.128 22:52:52 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:35.128 22:52:52 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:35.128 22:52:52 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:35.128 22:52:52 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:35.128 22:52:52 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:35.128 22:52:52 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:35.128 22:52:52 -- common/autotest_common.sh@10 -- # set +x 
00:04:35.128 22:52:52 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:35.128 22:52:52 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:35.128 22:52:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.128 22:52:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.128 22:52:52 -- common/autotest_common.sh@10 -- # set +x 00:04:35.128 ************************************ 00:04:35.128 START TEST env 00:04:35.128 ************************************ 00:04:35.128 22:52:52 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:35.128 * Looking for test storage... 00:04:35.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:35.128 22:52:52 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:35.128 22:52:52 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.128 22:52:52 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.128 22:52:52 env -- common/autotest_common.sh@10 -- # set +x 00:04:35.128 ************************************ 00:04:35.128 START TEST env_memory 00:04:35.128 ************************************ 00:04:35.128 22:52:52 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:35.128 00:04:35.128 00:04:35.128 CUnit - A unit testing framework for C - Version 2.1-3 00:04:35.128 http://cunit.sourceforge.net/ 00:04:35.128 00:04:35.128 00:04:35.128 Suite: memory 00:04:35.389 Test: alloc and free memory map ...[2024-07-24 22:52:52.939892] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:35.389 passed 00:04:35.389 Test: mem map translation ...[2024-07-24 22:52:52.965261] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:35.389 [2024-07-24 22:52:52.965281] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:35.389 [2024-07-24 22:52:52.965326] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:35.389 [2024-07-24 22:52:52.965333] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:35.389 passed 00:04:35.389 Test: mem map registration ...[2024-07-24 22:52:53.020388] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:35.389 [2024-07-24 22:52:53.020405] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:35.389 passed 00:04:35.389 Test: mem map adjacent registrations ...passed 00:04:35.389 00:04:35.389 Run Summary: Type Total Ran Passed Failed Inactive 00:04:35.389 suites 1 1 n/a 0 0 00:04:35.389 tests 4 4 4 0 0 00:04:35.389 asserts 152 152 152 0 n/a 00:04:35.389 00:04:35.389 Elapsed time = 0.191 seconds 00:04:35.389 00:04:35.389 real 0m0.205s 00:04:35.389 user 0m0.193s 00:04:35.389 sys 0m0.011s 00:04:35.389 22:52:53 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.389 22:52:53 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:35.389 ************************************ 00:04:35.389 END TEST env_memory 00:04:35.389 ************************************ 
00:04:35.389 22:52:53 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:35.389 22:52:53 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.389 22:52:53 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.389 22:52:53 env -- common/autotest_common.sh@10 -- # set +x 00:04:35.389 ************************************ 00:04:35.389 START TEST env_vtophys 00:04:35.389 ************************************ 00:04:35.389 22:52:53 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:35.651 EAL: lib.eal log level changed from notice to debug 00:04:35.651 EAL: Detected lcore 0 as core 0 on socket 0 00:04:35.651 EAL: Detected lcore 1 as core 1 on socket 0 00:04:35.651 EAL: Detected lcore 2 as core 2 on socket 0 00:04:35.651 EAL: Detected lcore 3 as core 3 on socket 0 00:04:35.651 EAL: Detected lcore 4 as core 4 on socket 0 00:04:35.651 EAL: Detected lcore 5 as core 5 on socket 0 00:04:35.651 EAL: Detected lcore 6 as core 6 on socket 0 00:04:35.651 EAL: Detected lcore 7 as core 7 on socket 0 00:04:35.651 EAL: Detected lcore 8 as core 8 on socket 0 00:04:35.651 EAL: Detected lcore 9 as core 9 on socket 0 00:04:35.651 EAL: Detected lcore 10 as core 10 on socket 0 00:04:35.651 EAL: Detected lcore 11 as core 11 on socket 0 00:04:35.651 EAL: Detected lcore 12 as core 12 on socket 0 00:04:35.651 EAL: Detected lcore 13 as core 13 on socket 0 00:04:35.651 EAL: Detected lcore 14 as core 14 on socket 0 00:04:35.651 EAL: Detected lcore 15 as core 15 on socket 0 00:04:35.651 EAL: Detected lcore 16 as core 16 on socket 0 00:04:35.651 EAL: Detected lcore 17 as core 17 on socket 0 00:04:35.651 EAL: Detected lcore 18 as core 18 on socket 0 00:04:35.651 EAL: Detected lcore 19 as core 19 on socket 0 00:04:35.651 EAL: Detected lcore 20 as core 20 on socket 0 00:04:35.651 EAL: Detected lcore 21 as core 21 on 
socket 0 00:04:35.651 EAL: Detected lcore 22 as core 22 on socket 0 00:04:35.651 EAL: Detected lcore 23 as core 23 on socket 0 00:04:35.651 EAL: Detected lcore 24 as core 24 on socket 0 00:04:35.651 EAL: Detected lcore 25 as core 25 on socket 0 00:04:35.651 EAL: Detected lcore 26 as core 26 on socket 0 00:04:35.651 EAL: Detected lcore 27 as core 27 on socket 0 00:04:35.651 EAL: Detected lcore 28 as core 28 on socket 0 00:04:35.651 EAL: Detected lcore 29 as core 29 on socket 0 00:04:35.651 EAL: Detected lcore 30 as core 30 on socket 0 00:04:35.651 EAL: Detected lcore 31 as core 31 on socket 0 00:04:35.651 EAL: Detected lcore 32 as core 32 on socket 0 00:04:35.651 EAL: Detected lcore 33 as core 33 on socket 0 00:04:35.651 EAL: Detected lcore 34 as core 34 on socket 0 00:04:35.651 EAL: Detected lcore 35 as core 35 on socket 0 00:04:35.651 EAL: Detected lcore 36 as core 0 on socket 1 00:04:35.651 EAL: Detected lcore 37 as core 1 on socket 1 00:04:35.651 EAL: Detected lcore 38 as core 2 on socket 1 00:04:35.651 EAL: Detected lcore 39 as core 3 on socket 1 00:04:35.651 EAL: Detected lcore 40 as core 4 on socket 1 00:04:35.651 EAL: Detected lcore 41 as core 5 on socket 1 00:04:35.651 EAL: Detected lcore 42 as core 6 on socket 1 00:04:35.651 EAL: Detected lcore 43 as core 7 on socket 1 00:04:35.651 EAL: Detected lcore 44 as core 8 on socket 1 00:04:35.651 EAL: Detected lcore 45 as core 9 on socket 1 00:04:35.651 EAL: Detected lcore 46 as core 10 on socket 1 00:04:35.651 EAL: Detected lcore 47 as core 11 on socket 1 00:04:35.651 EAL: Detected lcore 48 as core 12 on socket 1 00:04:35.651 EAL: Detected lcore 49 as core 13 on socket 1 00:04:35.651 EAL: Detected lcore 50 as core 14 on socket 1 00:04:35.651 EAL: Detected lcore 51 as core 15 on socket 1 00:04:35.651 EAL: Detected lcore 52 as core 16 on socket 1 00:04:35.651 EAL: Detected lcore 53 as core 17 on socket 1 00:04:35.651 EAL: Detected lcore 54 as core 18 on socket 1 00:04:35.651 EAL: Detected lcore 55 as core 19 on 
socket 1 00:04:35.651 EAL: Detected lcore 56 as core 20 on socket 1 00:04:35.651 EAL: Detected lcore 57 as core 21 on socket 1 00:04:35.651 EAL: Detected lcore 58 as core 22 on socket 1 00:04:35.651 EAL: Detected lcore 59 as core 23 on socket 1 00:04:35.651 EAL: Detected lcore 60 as core 24 on socket 1 00:04:35.651 EAL: Detected lcore 61 as core 25 on socket 1 00:04:35.651 EAL: Detected lcore 62 as core 26 on socket 1 00:04:35.651 EAL: Detected lcore 63 as core 27 on socket 1 00:04:35.651 EAL: Detected lcore 64 as core 28 on socket 1 00:04:35.651 EAL: Detected lcore 65 as core 29 on socket 1 00:04:35.651 EAL: Detected lcore 66 as core 30 on socket 1 00:04:35.651 EAL: Detected lcore 67 as core 31 on socket 1 00:04:35.651 EAL: Detected lcore 68 as core 32 on socket 1 00:04:35.651 EAL: Detected lcore 69 as core 33 on socket 1 00:04:35.651 EAL: Detected lcore 70 as core 34 on socket 1 00:04:35.651 EAL: Detected lcore 71 as core 35 on socket 1 00:04:35.651 EAL: Detected lcore 72 as core 0 on socket 0 00:04:35.651 EAL: Detected lcore 73 as core 1 on socket 0 00:04:35.651 EAL: Detected lcore 74 as core 2 on socket 0 00:04:35.651 EAL: Detected lcore 75 as core 3 on socket 0 00:04:35.651 EAL: Detected lcore 76 as core 4 on socket 0 00:04:35.651 EAL: Detected lcore 77 as core 5 on socket 0 00:04:35.651 EAL: Detected lcore 78 as core 6 on socket 0 00:04:35.651 EAL: Detected lcore 79 as core 7 on socket 0 00:04:35.651 EAL: Detected lcore 80 as core 8 on socket 0 00:04:35.651 EAL: Detected lcore 81 as core 9 on socket 0 00:04:35.651 EAL: Detected lcore 82 as core 10 on socket 0 00:04:35.651 EAL: Detected lcore 83 as core 11 on socket 0 00:04:35.651 EAL: Detected lcore 84 as core 12 on socket 0 00:04:35.651 EAL: Detected lcore 85 as core 13 on socket 0 00:04:35.651 EAL: Detected lcore 86 as core 14 on socket 0 00:04:35.651 EAL: Detected lcore 87 as core 15 on socket 0 00:04:35.651 EAL: Detected lcore 88 as core 16 on socket 0 00:04:35.651 EAL: Detected lcore 89 as core 17 on 
socket 0 00:04:35.651 EAL: Detected lcore 90 as core 18 on socket 0 00:04:35.651 EAL: Detected lcore 91 as core 19 on socket 0 00:04:35.651 EAL: Detected lcore 92 as core 20 on socket 0 00:04:35.651 EAL: Detected lcore 93 as core 21 on socket 0 00:04:35.651 EAL: Detected lcore 94 as core 22 on socket 0 00:04:35.651 EAL: Detected lcore 95 as core 23 on socket 0 00:04:35.651 EAL: Detected lcore 96 as core 24 on socket 0 00:04:35.651 EAL: Detected lcore 97 as core 25 on socket 0 00:04:35.651 EAL: Detected lcore 98 as core 26 on socket 0 00:04:35.651 EAL: Detected lcore 99 as core 27 on socket 0 00:04:35.651 EAL: Detected lcore 100 as core 28 on socket 0 00:04:35.651 EAL: Detected lcore 101 as core 29 on socket 0 00:04:35.651 EAL: Detected lcore 102 as core 30 on socket 0 00:04:35.651 EAL: Detected lcore 103 as core 31 on socket 0 00:04:35.651 EAL: Detected lcore 104 as core 32 on socket 0 00:04:35.651 EAL: Detected lcore 105 as core 33 on socket 0 00:04:35.651 EAL: Detected lcore 106 as core 34 on socket 0 00:04:35.651 EAL: Detected lcore 107 as core 35 on socket 0 00:04:35.651 EAL: Detected lcore 108 as core 0 on socket 1 00:04:35.651 EAL: Detected lcore 109 as core 1 on socket 1 00:04:35.651 EAL: Detected lcore 110 as core 2 on socket 1 00:04:35.651 EAL: Detected lcore 111 as core 3 on socket 1 00:04:35.651 EAL: Detected lcore 112 as core 4 on socket 1 00:04:35.651 EAL: Detected lcore 113 as core 5 on socket 1 00:04:35.651 EAL: Detected lcore 114 as core 6 on socket 1 00:04:35.651 EAL: Detected lcore 115 as core 7 on socket 1 00:04:35.651 EAL: Detected lcore 116 as core 8 on socket 1 00:04:35.651 EAL: Detected lcore 117 as core 9 on socket 1 00:04:35.651 EAL: Detected lcore 118 as core 10 on socket 1 00:04:35.651 EAL: Detected lcore 119 as core 11 on socket 1 00:04:35.651 EAL: Detected lcore 120 as core 12 on socket 1 00:04:35.651 EAL: Detected lcore 121 as core 13 on socket 1 00:04:35.651 EAL: Detected lcore 122 as core 14 on socket 1 00:04:35.651 EAL: Detected 
lcore 123 as core 15 on socket 1 00:04:35.651 EAL: Detected lcore 124 as core 16 on socket 1 00:04:35.651 EAL: Detected lcore 125 as core 17 on socket 1 00:04:35.651 EAL: Detected lcore 126 as core 18 on socket 1 00:04:35.651 EAL: Detected lcore 127 as core 19 on socket 1 00:04:35.651 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:35.651 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:35.651 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:35.651 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:35.651 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:35.651 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:35.651 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:35.651 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:35.651 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:35.651 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:35.651 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:35.651 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:35.651 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:35.651 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:35.651 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:35.651 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:35.651 EAL: Maximum logical cores by configuration: 128 00:04:35.651 EAL: Detected CPU lcores: 128 00:04:35.651 EAL: Detected NUMA nodes: 2 00:04:35.651 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:35.651 EAL: Detected shared linkage of DPDK 00:04:35.651 EAL: No shared files mode enabled, IPC will be disabled 00:04:35.651 EAL: Bus pci wants IOVA as 'DC' 00:04:35.651 EAL: Buses did not request a specific IOVA mode. 00:04:35.651 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:35.651 EAL: Selected IOVA mode 'VA' 00:04:35.651 EAL: No free 2048 kB hugepages reported on node 1 00:04:35.651 EAL: Probing VFIO support... 
00:04:35.651 EAL: IOMMU type 1 (Type 1) is supported 00:04:35.651 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:35.651 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:35.651 EAL: VFIO support initialized 00:04:35.651 EAL: Ask a virtual area of 0x2e000 bytes 00:04:35.651 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:35.651 EAL: Setting up physically contiguous memory... 00:04:35.651 EAL: Setting maximum number of open files to 524288 00:04:35.651 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:35.651 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:35.651 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:35.651 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.651 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:35.651 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:35.651 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.651 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:35.652 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:35.652 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.652 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:35.652 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:35.652 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.652 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:35.652 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:35.652 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.652 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:35.652 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:35.652 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.652 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:35.652 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:35.652 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.652 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:35.652 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:35.652 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.652 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:35.652 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:35.652 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:35.652 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.652 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:35.652 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:35.652 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.652 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:35.652 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:35.652 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.652 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:35.652 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:35.652 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.652 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:35.652 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:35.652 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.652 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:35.652 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:35.652 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.652 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:35.652 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:35.652 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.652 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:35.652 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:35.652 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.652 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:04:35.652 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:35.652 EAL: Hugepages will be freed exactly as allocated. 00:04:35.652 EAL: No shared files mode enabled, IPC is disabled 00:04:35.652 EAL: No shared files mode enabled, IPC is disabled 00:04:35.652 EAL: TSC frequency is ~2400000 KHz 00:04:35.652 EAL: Main lcore 0 is ready (tid=7f9fc0a6ea00;cpuset=[0]) 00:04:35.652 EAL: Trying to obtain current memory policy. 00:04:35.652 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.652 EAL: Restoring previous memory policy: 0 00:04:35.652 EAL: request: mp_malloc_sync 00:04:35.652 EAL: No shared files mode enabled, IPC is disabled 00:04:35.652 EAL: Heap on socket 0 was expanded by 2MB 00:04:35.652 EAL: No shared files mode enabled, IPC is disabled 00:04:35.652 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:35.652 EAL: Mem event callback 'spdk:(nil)' registered 00:04:35.652 00:04:35.652 00:04:35.652 CUnit - A unit testing framework for C - Version 2.1-3 00:04:35.652 http://cunit.sourceforge.net/ 00:04:35.652 00:04:35.652 00:04:35.652 Suite: components_suite 00:04:35.652 Test: vtophys_malloc_test ...passed 00:04:35.652 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:35.652 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.652 EAL: Restoring previous memory policy: 4 00:04:35.652 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.652 EAL: request: mp_malloc_sync 00:04:35.652 EAL: No shared files mode enabled, IPC is disabled 00:04:35.652 EAL: Heap on socket 0 was expanded by 4MB 00:04:35.652 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.652 EAL: request: mp_malloc_sync 00:04:35.652 EAL: No shared files mode enabled, IPC is disabled 00:04:35.652 EAL: Heap on socket 0 was shrunk by 4MB 00:04:35.652 EAL: Trying to obtain current memory policy. 
00:04:35.652 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.652 EAL: Restoring previous memory policy: 4 00:04:35.652 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.652 EAL: request: mp_malloc_sync 00:04:35.652 EAL: No shared files mode enabled, IPC is disabled 00:04:35.652 EAL: Heap on socket 0 was expanded by 6MB 00:04:35.652 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.652 EAL: request: mp_malloc_sync 00:04:35.652 EAL: No shared files mode enabled, IPC is disabled 00:04:35.652 EAL: Heap on socket 0 was shrunk by 6MB 00:04:35.652 EAL: Trying to obtain current memory policy. 00:04:35.652 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.652 EAL: Restoring previous memory policy: 4 00:04:35.652 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.652 EAL: request: mp_malloc_sync 00:04:35.652 EAL: No shared files mode enabled, IPC is disabled 00:04:35.652 EAL: Heap on socket 0 was expanded by 10MB 00:04:35.652 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.652 EAL: request: mp_malloc_sync 00:04:35.652 EAL: No shared files mode enabled, IPC is disabled 00:04:35.652 EAL: Heap on socket 0 was shrunk by 10MB 00:04:35.652 EAL: Trying to obtain current memory policy. 00:04:35.652 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.652 EAL: Restoring previous memory policy: 4 00:04:35.652 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.652 EAL: request: mp_malloc_sync 00:04:35.652 EAL: No shared files mode enabled, IPC is disabled 00:04:35.652 EAL: Heap on socket 0 was expanded by 18MB 00:04:35.652 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.652 EAL: request: mp_malloc_sync 00:04:35.652 EAL: No shared files mode enabled, IPC is disabled 00:04:35.652 EAL: Heap on socket 0 was shrunk by 18MB 00:04:35.652 EAL: Trying to obtain current memory policy. 
00:04:35.652 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:35.652 EAL: Restoring previous memory policy: 4
00:04:35.652 EAL: Calling mem event callback 'spdk:(nil)'
00:04:35.652 EAL: request: mp_malloc_sync
00:04:35.652 EAL: No shared files mode enabled, IPC is disabled
00:04:35.652 EAL: Heap on socket 0 was expanded by 34MB
00:04:35.652 EAL: Calling mem event callback 'spdk:(nil)'
00:04:35.652 EAL: request: mp_malloc_sync
00:04:35.652 EAL: No shared files mode enabled, IPC is disabled
00:04:35.652 EAL: Heap on socket 0 was shrunk by 34MB
00:04:35.652 EAL: Trying to obtain current memory policy.
00:04:35.652 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:35.652 EAL: Restoring previous memory policy: 4
00:04:35.652 EAL: Calling mem event callback 'spdk:(nil)'
00:04:35.652 EAL: request: mp_malloc_sync
00:04:35.652 EAL: No shared files mode enabled, IPC is disabled
00:04:35.652 EAL: Heap on socket 0 was expanded by 66MB
00:04:35.652 EAL: Calling mem event callback 'spdk:(nil)'
00:04:35.652 EAL: request: mp_malloc_sync
00:04:35.652 EAL: No shared files mode enabled, IPC is disabled
00:04:35.652 EAL: Heap on socket 0 was shrunk by 66MB
00:04:35.652 EAL: Trying to obtain current memory policy.
00:04:35.652 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:35.652 EAL: Restoring previous memory policy: 4
00:04:35.652 EAL: Calling mem event callback 'spdk:(nil)'
00:04:35.652 EAL: request: mp_malloc_sync
00:04:35.652 EAL: No shared files mode enabled, IPC is disabled
00:04:35.652 EAL: Heap on socket 0 was expanded by 130MB
00:04:35.652 EAL: Calling mem event callback 'spdk:(nil)'
00:04:35.652 EAL: request: mp_malloc_sync
00:04:35.652 EAL: No shared files mode enabled, IPC is disabled
00:04:35.652 EAL: Heap on socket 0 was shrunk by 130MB
00:04:35.652 EAL: Trying to obtain current memory policy.
00:04:35.652 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:35.652 EAL: Restoring previous memory policy: 4
00:04:35.652 EAL: Calling mem event callback 'spdk:(nil)'
00:04:35.652 EAL: request: mp_malloc_sync
00:04:35.652 EAL: No shared files mode enabled, IPC is disabled
00:04:35.652 EAL: Heap on socket 0 was expanded by 258MB
00:04:35.652 EAL: Calling mem event callback 'spdk:(nil)'
00:04:35.652 EAL: request: mp_malloc_sync
00:04:35.652 EAL: No shared files mode enabled, IPC is disabled
00:04:35.652 EAL: Heap on socket 0 was shrunk by 258MB
00:04:35.652 EAL: Trying to obtain current memory policy.
00:04:35.652 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:35.912 EAL: Restoring previous memory policy: 4
00:04:35.912 EAL: Calling mem event callback 'spdk:(nil)'
00:04:35.912 EAL: request: mp_malloc_sync
00:04:35.912 EAL: No shared files mode enabled, IPC is disabled
00:04:35.912 EAL: Heap on socket 0 was expanded by 514MB
00:04:35.912 EAL: Calling mem event callback 'spdk:(nil)'
00:04:35.912 EAL: request: mp_malloc_sync
00:04:35.912 EAL: No shared files mode enabled, IPC is disabled
00:04:35.912 EAL: Heap on socket 0 was shrunk by 514MB
00:04:35.912 EAL: Trying to obtain current memory policy.
00:04:35.912 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:36.171 EAL: Restoring previous memory policy: 4
00:04:36.171 EAL: Calling mem event callback 'spdk:(nil)'
00:04:36.171 EAL: request: mp_malloc_sync
00:04:36.171 EAL: No shared files mode enabled, IPC is disabled
00:04:36.171 EAL: Heap on socket 0 was expanded by 1026MB
00:04:36.171 EAL: Calling mem event callback 'spdk:(nil)'
00:04:36.171 EAL: request: mp_malloc_sync
00:04:36.171 EAL: No shared files mode enabled, IPC is disabled
00:04:36.171 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:36.171 passed
00:04:36.171
00:04:36.171 Run Summary: Type Total Ran Passed Failed Inactive
00:04:36.171 suites 1 1 n/a 0 0
00:04:36.171 tests 2 2 2 0 0
00:04:36.171 asserts 497 497 497 0 n/a
00:04:36.171
00:04:36.171 Elapsed time = 0.642 seconds
00:04:36.171 EAL: Calling mem event callback 'spdk:(nil)'
00:04:36.171 EAL: request: mp_malloc_sync
00:04:36.171 EAL: No shared files mode enabled, IPC is disabled
00:04:36.171 EAL: Heap on socket 0 was shrunk by 2MB
00:04:36.171 EAL: No shared files mode enabled, IPC is disabled
00:04:36.171 EAL: No shared files mode enabled, IPC is disabled
00:04:36.171 EAL: No shared files mode enabled, IPC is disabled
00:04:36.171
00:04:36.171 real 0m0.762s
00:04:36.171 user 0m0.410s
00:04:36.171 sys 0m0.327s
00:04:36.171 22:52:53 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:36.171 22:52:53 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:36.171 ************************************
00:04:36.171 END TEST env_vtophys
00:04:36.171 ************************************
00:04:36.432 22:52:53 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:36.432 22:52:53 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:36.432 22:52:53 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:36.432 22:52:53 env -- common/autotest_common.sh@10 -- # set +x
00:04:36.432 ************************************
00:04:36.432 START TEST env_pci ************************************
00:04:36.432 22:52:54 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:36.432
00:04:36.432
00:04:36.432 CUnit - A unit testing framework for C - Version 2.1-3
00:04:36.432 http://cunit.sourceforge.net/
00:04:36.432
00:04:36.432
00:04:36.432 Suite: pci
00:04:36.432 Test: pci_hook ...[2024-07-24 22:52:54.020656] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 617086 has claimed it
00:04:36.432 EAL: Cannot find device (10000:00:01.0)
00:04:36.432 EAL: Failed to attach device on primary process
00:04:36.432 passed
00:04:36.432
00:04:36.432 Run Summary: Type Total Ran Passed Failed Inactive
00:04:36.432 suites 1 1 n/a 0 0
00:04:36.432 tests 1 1 1 0 0
00:04:36.432 asserts 25 25 25 0 n/a
00:04:36.432
00:04:36.432 Elapsed time = 0.032 seconds
00:04:36.432
00:04:36.432 real 0m0.052s
00:04:36.432 user 0m0.016s
00:04:36.432 sys 0m0.036s
00:04:36.432 22:52:54 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:36.432 22:52:54 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:36.432 ************************************
00:04:36.432 END TEST env_pci
00:04:36.432 ************************************
00:04:36.432 22:52:54 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:36.432 22:52:54 env -- env/env.sh@15 -- # uname
00:04:36.432 22:52:54 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:36.432 22:52:54 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:36.432 22:52:54 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:36.432 22:52:54 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:04:36.432 22:52:54 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:36.432 22:52:54 env -- common/autotest_common.sh@10 -- # set +x
00:04:36.432 ************************************
00:04:36.432 START TEST env_dpdk_post_init
00:04:36.432 ************************************
00:04:36.432 22:52:54 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:36.432 EAL: Detected CPU lcores: 128
00:04:36.432 EAL: Detected NUMA nodes: 2
00:04:36.432 EAL: Detected shared linkage of DPDK
00:04:36.432 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:36.432 EAL: Selected IOVA mode 'VA'
00:04:36.432 EAL: No free 2048 kB hugepages reported on node 1
00:04:36.432 EAL: VFIO support initialized
00:04:36.432 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:36.693 EAL: Using IOMMU type 1 (Type 1)
00:04:36.693 EAL: Ignore mapping IO port bar(1)
00:04:36.953 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0)
00:04:36.953 EAL: Ignore mapping IO port bar(1)
00:04:36.953 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0)
00:04:37.214 EAL: Ignore mapping IO port bar(1)
00:04:37.214 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0)
00:04:37.474 EAL: Ignore mapping IO port bar(1)
00:04:37.474 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0)
00:04:37.735 EAL: Ignore mapping IO port bar(1)
00:04:37.735 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0)
00:04:37.735 EAL: Ignore mapping IO port bar(1)
00:04:37.996 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0)
00:04:37.996 EAL: Ignore mapping IO port bar(1)
00:04:38.257 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0)
00:04:38.257 EAL: Ignore mapping IO port bar(1)
00:04:38.517 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0)
00:04:38.517 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0)
00:04:38.779 EAL: Ignore mapping IO port bar(1)
00:04:38.779 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1)
00:04:39.039 EAL: Ignore mapping IO port bar(1)
00:04:39.039 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1)
00:04:39.300 EAL: Ignore mapping IO port bar(1)
00:04:39.300 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1)
00:04:39.300 EAL: Ignore mapping IO port bar(1)
00:04:39.560 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1)
00:04:39.560 EAL: Ignore mapping IO port bar(1)
00:04:39.821 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1)
00:04:39.821 EAL: Ignore mapping IO port bar(1)
00:04:40.082 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1)
00:04:40.082 EAL: Ignore mapping IO port bar(1)
00:04:40.082 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1)
00:04:40.342 EAL: Ignore mapping IO port bar(1)
00:04:40.342 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1)
00:04:40.342 EAL: Releasing PCI mapped resource for 0000:65:00.0
00:04:40.342 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000
00:04:40.603 Starting DPDK initialization...
00:04:40.603 Starting SPDK post initialization...
00:04:40.603 SPDK NVMe probe
00:04:40.603 Attaching to 0000:65:00.0
00:04:40.603 Attached to 0000:65:00.0
00:04:40.603 Cleaning up...
00:04:42.563
00:04:42.563 real 0m5.725s
00:04:42.563 user 0m0.186s
00:04:42.563 sys 0m0.082s
00:04:42.563 22:52:59 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:42.563 22:52:59 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:42.563 ************************************
00:04:42.563 END TEST env_dpdk_post_init ************************************
00:04:42.563 22:52:59 env -- env/env.sh@26 -- # uname
00:04:42.563 22:52:59 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:42.563 22:52:59 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:42.563 22:52:59 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:42.563 22:52:59 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:42.563 22:52:59 env -- common/autotest_common.sh@10 -- # set +x
00:04:42.563 ************************************
00:04:42.563 START TEST env_mem_callbacks ************************************
00:04:42.563 22:52:59 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:42.563 EAL: Detected CPU lcores: 128
00:04:42.563 EAL: Detected NUMA nodes: 2
00:04:42.563 EAL: Detected shared linkage of DPDK
00:04:42.563 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:42.563 EAL: Selected IOVA mode 'VA'
00:04:42.563 EAL: No free 2048 kB hugepages reported on node 1
00:04:42.563 EAL: VFIO support initialized
00:04:42.563 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:42.563
00:04:42.563
00:04:42.563 CUnit - A unit testing framework for C - Version 2.1-3
00:04:42.563 http://cunit.sourceforge.net/
00:04:42.563
00:04:42.563
00:04:42.563 Suite: memory
00:04:42.563 Test: test ...
00:04:42.563 register 0x200000200000 2097152
00:04:42.563 malloc 3145728
00:04:42.563 register 0x200000400000 4194304
00:04:42.563 buf 0x200000500000 len 3145728 PASSED
00:04:42.563 malloc 64
00:04:42.563 buf 0x2000004fff40 len 64 PASSED
00:04:42.563 malloc 4194304
00:04:42.563 register 0x200000800000 6291456
00:04:42.563 buf 0x200000a00000 len 4194304 PASSED
00:04:42.563 free 0x200000500000 3145728
00:04:42.563 free 0x2000004fff40 64
00:04:42.563 unregister 0x200000400000 4194304 PASSED
00:04:42.563 free 0x200000a00000 4194304
00:04:42.563 unregister 0x200000800000 6291456 PASSED
00:04:42.563 malloc 8388608
00:04:42.563 register 0x200000400000 10485760
00:04:42.563 buf 0x200000600000 len 8388608 PASSED
00:04:42.563 free 0x200000600000 8388608
00:04:42.563 unregister 0x200000400000 10485760 PASSED
00:04:42.563 passed
00:04:42.563
00:04:42.563 Run Summary: Type Total Ran Passed Failed Inactive
00:04:42.563 suites 1 1 n/a 0 0
00:04:42.563 tests 1 1 1 0 0
00:04:42.563 asserts 15 15 15 0 n/a
00:04:42.563
00:04:42.563 Elapsed time = 0.004 seconds
00:04:42.563
00:04:42.563 real 0m0.061s
00:04:42.563 user 0m0.018s
00:04:42.563 sys 0m0.043s
00:04:42.563 22:53:00 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:42.563 22:53:00 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:42.563 ************************************
00:04:42.563 END TEST env_mem_callbacks ************************************
00:04:42.563
00:04:42.563 real 0m7.297s
00:04:42.563 user 0m0.995s
00:04:42.563 sys 0m0.846s
00:04:42.563 22:53:00 env -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:42.563 22:53:00 env -- common/autotest_common.sh@10 -- # set +x
00:04:42.563 ************************************
00:04:42.563 END TEST env ************************************
00:04:42.563 22:53:00 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:42.563 22:53:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:42.563 22:53:00 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:42.563 22:53:00 -- common/autotest_common.sh@10 -- # set +x
00:04:42.563 ************************************
00:04:42.563 START TEST rpc ************************************
00:04:42.563 22:53:00 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:42.563 * Looking for test storage...
00:04:42.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:42.563 22:53:00 rpc -- rpc/rpc.sh@65 -- # spdk_pid=618535
00:04:42.563 22:53:00 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:42.563 22:53:00 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:04:42.563 22:53:00 rpc -- rpc/rpc.sh@67 -- # waitforlisten 618535
00:04:42.563 22:53:00 rpc -- common/autotest_common.sh@831 -- # '[' -z 618535 ']'
00:04:42.563 22:53:00 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:42.563 22:53:00 rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:42.563 22:53:00 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:42.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:42.563 22:53:00 rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:42.563 22:53:00 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:42.563 [2024-07-24 22:53:00.262595] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization...
00:04:42.563 [2024-07-24 22:53:00.262669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid618535 ]
00:04:42.563 EAL: No free 2048 kB hugepages reported on node 1
00:04:42.563 [2024-07-24 22:53:00.334958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:42.824 [2024-07-24 22:53:00.410058] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:04:42.824 [2024-07-24 22:53:00.410097] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 618535' to capture a snapshot of events at runtime.
00:04:42.824 [2024-07-24 22:53:00.410104] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:04:42.824 [2024-07-24 22:53:00.410111] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:04:42.824 [2024-07-24 22:53:00.410117] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid618535 for offline analysis/debug.
00:04:42.824 [2024-07-24 22:53:00.410134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:43.395 22:53:01 rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:43.395 22:53:01 rpc -- common/autotest_common.sh@864 -- # return 0
00:04:43.395 22:53:01 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:43.395 22:53:01 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:43.395 22:53:01 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:04:43.395 22:53:01 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:04:43.395 22:53:01 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:43.395 22:53:01 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:43.395 22:53:01 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:43.395 ************************************
00:04:43.395 START TEST rpc_integrity ************************************
00:04:43.395 22:53:01 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity
00:04:43.395 22:53:01 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:43.395 22:53:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:43.395 22:53:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:43.395 22:53:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:43.395 22:53:01 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:43.395 22:53:01 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:43.395 22:53:01 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:43.395 22:53:01 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:43.395 22:53:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:43.395 22:53:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:43.395 22:53:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:43.395 22:53:01 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:04:43.395 22:53:01 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:43.395 22:53:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:43.395 22:53:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:43.395 22:53:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:43.395 22:53:01 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:43.395 {
00:04:43.395 "name": "Malloc0",
00:04:43.395 "aliases": [
00:04:43.395 "ec2ac5a7-a1a0-4af4-8a7e-c664f797be53"
00:04:43.395 ],
00:04:43.395 "product_name": "Malloc disk",
00:04:43.395 "block_size": 512,
00:04:43.395 "num_blocks": 16384,
00:04:43.395 "uuid": "ec2ac5a7-a1a0-4af4-8a7e-c664f797be53",
00:04:43.395 "assigned_rate_limits": {
00:04:43.395 "rw_ios_per_sec": 0,
00:04:43.395 "rw_mbytes_per_sec": 0,
00:04:43.395 "r_mbytes_per_sec": 0,
00:04:43.395 "w_mbytes_per_sec": 0
00:04:43.395 },
00:04:43.395 "claimed": false,
00:04:43.395 "zoned": false,
00:04:43.396 "supported_io_types": {
00:04:43.396 "read": true,
00:04:43.396 "write": true,
00:04:43.396 "unmap": true,
00:04:43.396 "flush": true,
00:04:43.396 "reset": true,
00:04:43.396 "nvme_admin": false,
00:04:43.396 "nvme_io": false,
00:04:43.396 "nvme_io_md": false,
00:04:43.396 "write_zeroes": true,
00:04:43.396 "zcopy": true,
00:04:43.396 "get_zone_info": false,
00:04:43.396 "zone_management": false,
00:04:43.396 "zone_append": false,
00:04:43.396 "compare": false,
00:04:43.396 "compare_and_write": false,
00:04:43.396 "abort": true,
00:04:43.396 "seek_hole": false,
00:04:43.396 "seek_data": false,
00:04:43.396 "copy": true,
00:04:43.396 "nvme_iov_md": false
00:04:43.396 },
00:04:43.396 "memory_domains": [
00:04:43.396 {
00:04:43.396 "dma_device_id": "system",
00:04:43.396 "dma_device_type": 1
00:04:43.396 },
00:04:43.396 {
00:04:43.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:43.396 "dma_device_type": 2
00:04:43.396 }
00:04:43.396 ],
00:04:43.396 "driver_specific": {}
00:04:43.396 }
00:04:43.396 ]'
00:04:43.396 22:53:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:43.657 22:53:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:43.657 22:53:01 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:04:43.657 22:53:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:43.657 22:53:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:43.657 [2024-07-24 22:53:01.203336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:04:43.657 [2024-07-24 22:53:01.203367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:43.657 [2024-07-24 22:53:01.203383] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa30a10
00:04:43.657 [2024-07-24 22:53:01.203390] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:43.657 [2024-07-24 22:53:01.204712] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:43.657 [2024-07-24 22:53:01.204732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:43.657 Passthru0
00:04:43.657 22:53:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:43.657 22:53:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:43.657 22:53:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:43.657 22:53:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:43.657 22:53:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:43.657 22:53:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:43.657 {
00:04:43.657 "name": "Malloc0",
00:04:43.657 "aliases": [
00:04:43.657 "ec2ac5a7-a1a0-4af4-8a7e-c664f797be53"
00:04:43.657 ],
00:04:43.657 "product_name": "Malloc disk",
00:04:43.657 "block_size": 512,
00:04:43.657 "num_blocks": 16384,
00:04:43.657 "uuid": "ec2ac5a7-a1a0-4af4-8a7e-c664f797be53",
00:04:43.657 "assigned_rate_limits": {
00:04:43.657 "rw_ios_per_sec": 0,
00:04:43.657 "rw_mbytes_per_sec": 0,
00:04:43.657 "r_mbytes_per_sec": 0,
00:04:43.657 "w_mbytes_per_sec": 0
00:04:43.657 },
00:04:43.657 "claimed": true,
00:04:43.657 "claim_type": "exclusive_write",
00:04:43.657 "zoned": false,
00:04:43.657 "supported_io_types": {
00:04:43.657 "read": true,
00:04:43.657 "write": true,
00:04:43.657 "unmap": true,
00:04:43.657 "flush": true,
00:04:43.657 "reset": true,
00:04:43.657 "nvme_admin": false,
00:04:43.657 "nvme_io": false,
00:04:43.657 "nvme_io_md": false,
00:04:43.657 "write_zeroes": true,
00:04:43.657 "zcopy": true,
00:04:43.657 "get_zone_info": false,
00:04:43.657 "zone_management": false,
00:04:43.657 "zone_append": false,
00:04:43.657 "compare": false,
00:04:43.657 "compare_and_write": false,
00:04:43.657 "abort": true,
00:04:43.657 "seek_hole": false,
00:04:43.657 "seek_data": false,
00:04:43.657 "copy": true,
00:04:43.657 "nvme_iov_md": false
00:04:43.657 },
00:04:43.657 "memory_domains": [
00:04:43.657 {
00:04:43.657 "dma_device_id": "system",
00:04:43.657 "dma_device_type": 1
00:04:43.657 },
00:04:43.657 {
00:04:43.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:43.657 "dma_device_type": 2
00:04:43.657 }
00:04:43.657 ],
00:04:43.657 "driver_specific": {}
00:04:43.657 },
00:04:43.657 {
00:04:43.657 "name": "Passthru0",
00:04:43.657 "aliases": [
00:04:43.657 "06ada57c-0039-5bc5-9c46-9fd008b33c30"
00:04:43.657 ],
00:04:43.657 "product_name": "passthru",
00:04:43.657 "block_size": 512,
00:04:43.657 "num_blocks": 16384,
00:04:43.657 "uuid": "06ada57c-0039-5bc5-9c46-9fd008b33c30",
00:04:43.657 "assigned_rate_limits": {
00:04:43.657 "rw_ios_per_sec": 0,
00:04:43.657 "rw_mbytes_per_sec": 0,
00:04:43.657 "r_mbytes_per_sec": 0,
00:04:43.657 "w_mbytes_per_sec": 0
00:04:43.657 },
00:04:43.657 "claimed": false,
00:04:43.657 "zoned": false,
00:04:43.657 "supported_io_types": {
00:04:43.657 "read": true,
00:04:43.657 "write": true,
00:04:43.657 "unmap": true,
00:04:43.657 "flush": true,
00:04:43.657 "reset": true,
00:04:43.657 "nvme_admin": false,
00:04:43.657 "nvme_io": false,
00:04:43.657 "nvme_io_md": false,
00:04:43.657 "write_zeroes": true,
00:04:43.657 "zcopy": true,
00:04:43.657 "get_zone_info": false,
00:04:43.657 "zone_management": false,
00:04:43.657 "zone_append": false,
00:04:43.657 "compare": false,
00:04:43.657 "compare_and_write": false,
00:04:43.657 "abort": true,
00:04:43.657 "seek_hole": false,
00:04:43.657 "seek_data": false,
00:04:43.657 "copy": true,
00:04:43.657 "nvme_iov_md": false
00:04:43.657 },
00:04:43.657 "memory_domains": [
00:04:43.657 {
00:04:43.657 "dma_device_id": "system",
00:04:43.657 "dma_device_type": 1
00:04:43.657 },
00:04:43.657 {
00:04:43.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:43.657 "dma_device_type": 2
00:04:43.657 }
00:04:43.657 ],
00:04:43.657 "driver_specific": {
00:04:43.657 "passthru": {
00:04:43.657 "name": "Passthru0",
00:04:43.657 "base_bdev_name": "Malloc0"
00:04:43.657 }
00:04:43.657 }
00:04:43.657 }
00:04:43.657 ]'
00:04:43.657 22:53:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:04:43.657 22:53:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:04:43.657 22:53:01 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:04:43.657 22:53:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:43.657 22:53:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:43.657 22:53:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:43.657 22:53:01 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:04:43.657 22:53:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:43.657 22:53:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:43.657 22:53:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:43.657 22:53:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:04:43.657 22:53:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:43.657 22:53:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:43.657 22:53:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:43.657 22:53:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:04:43.657 22:53:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:04:43.657 22:53:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:43.657
00:04:43.657 real 0m0.297s
00:04:43.657 user 0m0.184s
00:04:43.657 sys 0m0.046s
00:04:43.657 22:53:01 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:43.657 22:53:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:43.657 ************************************
00:04:43.657 END TEST rpc_integrity
00:04:43.657 ************************************
00:04:43.657 22:53:01 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:04:43.657 22:53:01 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:43.657 22:53:01 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:43.657 22:53:01 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:43.657 ************************************
00:04:43.657 START TEST rpc_plugins ************************************
00:04:43.657 22:53:01 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins
00:04:43.657 22:53:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:04:43.657 22:53:01 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:43.657 22:53:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:43.918 22:53:01 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:43.918 22:53:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:04:43.918 22:53:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:04:43.918 22:53:01 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:43.919 22:53:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:43.919 22:53:01 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:43.919 22:53:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:04:43.919 {
00:04:43.919 "name": "Malloc1",
00:04:43.919 "aliases": [
00:04:43.919 "cbed4bca-3766-4932-8033-4d7e932d4afe"
00:04:43.919 ],
00:04:43.919 "product_name": "Malloc disk",
00:04:43.919 "block_size": 4096,
00:04:43.919 "num_blocks": 256,
00:04:43.919 "uuid": "cbed4bca-3766-4932-8033-4d7e932d4afe",
00:04:43.919 "assigned_rate_limits": {
00:04:43.919 "rw_ios_per_sec": 0,
00:04:43.919 "rw_mbytes_per_sec": 0,
00:04:43.919 "r_mbytes_per_sec": 0,
00:04:43.919 "w_mbytes_per_sec": 0
00:04:43.919 },
00:04:43.919 "claimed": false,
00:04:43.919 "zoned": false,
00:04:43.919 "supported_io_types": {
00:04:43.919 "read": true,
00:04:43.919 "write": true,
00:04:43.919 "unmap": true,
00:04:43.919 "flush": true,
00:04:43.919 "reset": true,
00:04:43.919 "nvme_admin": false,
00:04:43.919 "nvme_io": false,
00:04:43.919 "nvme_io_md": false,
00:04:43.919 "write_zeroes": true,
00:04:43.919 "zcopy": true,
00:04:43.919 "get_zone_info": false,
00:04:43.919 "zone_management": false,
00:04:43.919 "zone_append": false,
00:04:43.919 "compare": false,
00:04:43.919 "compare_and_write": false,
00:04:43.919 "abort": true,
00:04:43.919 "seek_hole": false,
00:04:43.919 "seek_data": false,
00:04:43.919 "copy": true,
00:04:43.919 "nvme_iov_md": false
00:04:43.919 },
00:04:43.919 "memory_domains": [
00:04:43.919 {
00:04:43.919 "dma_device_id": "system",
00:04:43.919 "dma_device_type": 1
00:04:43.919 },
00:04:43.919 {
00:04:43.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:43.919 "dma_device_type": 2
00:04:43.919 }
00:04:43.919 ],
00:04:43.919 "driver_specific": {}
00:04:43.919 }
00:04:43.919 ]'
00:04:43.919 22:53:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:04:43.919 22:53:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:04:43.919 22:53:01 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:04:43.919 22:53:01 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:43.919 22:53:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:43.919 22:53:01 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:43.919 22:53:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:04:43.919 22:53:01 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:43.919 22:53:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:43.919 22:53:01 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:43.919 22:53:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:04:43.919 22:53:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:04:43.919 22:53:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:04:43.919
00:04:43.919 real 0m0.137s
00:04:43.919 user 0m0.089s
00:04:43.919 sys 0m0.017s
00:04:43.919 22:53:01 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:43.919 22:53:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:43.919 ************************************
00:04:43.919 END TEST rpc_plugins 00:04:43.919 ************************************ 00:04:43.919 22:53:01 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:43.919 22:53:01 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.919 22:53:01 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.919 22:53:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.919 ************************************ 00:04:43.919 START TEST rpc_trace_cmd_test 00:04:43.919 ************************************ 00:04:43.919 22:53:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:43.919 22:53:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:43.919 22:53:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:43.919 22:53:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.919 22:53:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:43.919 22:53:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.919 22:53:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:43.919 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid618535", 00:04:43.919 "tpoint_group_mask": "0x8", 00:04:43.919 "iscsi_conn": { 00:04:43.919 "mask": "0x2", 00:04:43.919 "tpoint_mask": "0x0" 00:04:43.919 }, 00:04:43.919 "scsi": { 00:04:43.919 "mask": "0x4", 00:04:43.919 "tpoint_mask": "0x0" 00:04:43.919 }, 00:04:43.919 "bdev": { 00:04:43.919 "mask": "0x8", 00:04:43.919 "tpoint_mask": "0xffffffffffffffff" 00:04:43.919 }, 00:04:43.919 "nvmf_rdma": { 00:04:43.919 "mask": "0x10", 00:04:43.919 "tpoint_mask": "0x0" 00:04:43.919 }, 00:04:43.919 "nvmf_tcp": { 00:04:43.919 "mask": "0x20", 00:04:43.919 "tpoint_mask": "0x0" 00:04:43.919 }, 00:04:43.919 "ftl": { 00:04:43.919 "mask": "0x40", 00:04:43.919 "tpoint_mask": "0x0" 00:04:43.919 }, 00:04:43.919 "blobfs": { 00:04:43.919 "mask": "0x80", 00:04:43.919 
"tpoint_mask": "0x0" 00:04:43.919 }, 00:04:43.919 "dsa": { 00:04:43.919 "mask": "0x200", 00:04:43.919 "tpoint_mask": "0x0" 00:04:43.919 }, 00:04:43.919 "thread": { 00:04:43.919 "mask": "0x400", 00:04:43.919 "tpoint_mask": "0x0" 00:04:43.919 }, 00:04:43.919 "nvme_pcie": { 00:04:43.919 "mask": "0x800", 00:04:43.919 "tpoint_mask": "0x0" 00:04:43.919 }, 00:04:43.919 "iaa": { 00:04:43.919 "mask": "0x1000", 00:04:43.919 "tpoint_mask": "0x0" 00:04:43.919 }, 00:04:43.919 "nvme_tcp": { 00:04:43.919 "mask": "0x2000", 00:04:43.919 "tpoint_mask": "0x0" 00:04:43.919 }, 00:04:43.919 "bdev_nvme": { 00:04:43.919 "mask": "0x4000", 00:04:43.919 "tpoint_mask": "0x0" 00:04:43.919 }, 00:04:43.919 "sock": { 00:04:43.919 "mask": "0x8000", 00:04:43.919 "tpoint_mask": "0x0" 00:04:43.919 } 00:04:43.919 }' 00:04:43.919 22:53:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:43.919 22:53:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:43.919 22:53:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:44.180 22:53:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:44.180 22:53:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:44.180 22:53:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:44.180 22:53:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:44.180 22:53:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:44.180 22:53:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:44.180 22:53:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:44.180 00:04:44.180 real 0m0.241s 00:04:44.180 user 0m0.200s 00:04:44.180 sys 0m0.032s 00:04:44.180 22:53:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.180 22:53:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:44.180 
************************************ 00:04:44.180 END TEST rpc_trace_cmd_test 00:04:44.180 ************************************ 00:04:44.180 22:53:01 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:44.180 22:53:01 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:44.180 22:53:01 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:44.180 22:53:01 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.180 22:53:01 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.180 22:53:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.180 ************************************ 00:04:44.180 START TEST rpc_daemon_integrity 00:04:44.180 ************************************ 00:04:44.180 22:53:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:44.180 22:53:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:44.180 22:53:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.180 22:53:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.441 22:53:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.441 22:53:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:44.441 22:53:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd 
bdev_get_bdevs 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:44.441 { 00:04:44.441 "name": "Malloc2", 00:04:44.441 "aliases": [ 00:04:44.441 "eb9cd2cb-a913-4aaa-9e1d-8bb08a794673" 00:04:44.441 ], 00:04:44.441 "product_name": "Malloc disk", 00:04:44.441 "block_size": 512, 00:04:44.441 "num_blocks": 16384, 00:04:44.441 "uuid": "eb9cd2cb-a913-4aaa-9e1d-8bb08a794673", 00:04:44.441 "assigned_rate_limits": { 00:04:44.441 "rw_ios_per_sec": 0, 00:04:44.441 "rw_mbytes_per_sec": 0, 00:04:44.441 "r_mbytes_per_sec": 0, 00:04:44.441 "w_mbytes_per_sec": 0 00:04:44.441 }, 00:04:44.441 "claimed": false, 00:04:44.441 "zoned": false, 00:04:44.441 "supported_io_types": { 00:04:44.441 "read": true, 00:04:44.441 "write": true, 00:04:44.441 "unmap": true, 00:04:44.441 "flush": true, 00:04:44.441 "reset": true, 00:04:44.441 "nvme_admin": false, 00:04:44.441 "nvme_io": false, 00:04:44.441 "nvme_io_md": false, 00:04:44.441 "write_zeroes": true, 00:04:44.441 "zcopy": true, 00:04:44.441 "get_zone_info": false, 00:04:44.441 "zone_management": false, 00:04:44.441 "zone_append": false, 00:04:44.441 "compare": false, 00:04:44.441 "compare_and_write": false, 00:04:44.441 "abort": true, 00:04:44.441 "seek_hole": false, 00:04:44.441 "seek_data": false, 00:04:44.441 "copy": true, 00:04:44.441 "nvme_iov_md": false 00:04:44.441 }, 00:04:44.441 "memory_domains": [ 00:04:44.441 { 00:04:44.441 "dma_device_id": "system", 00:04:44.441 "dma_device_type": 1 00:04:44.441 }, 00:04:44.441 { 00:04:44.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.441 "dma_device_type": 2 00:04:44.441 } 00:04:44.441 ], 00:04:44.441 "driver_specific": {} 00:04:44.441 } 00:04:44.441 ]' 00:04:44.441 
22:53:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.441 [2024-07-24 22:53:02.097773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:44.441 [2024-07-24 22:53:02.097801] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:44.441 [2024-07-24 22:53:02.097816] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbc7fe0 00:04:44.441 [2024-07-24 22:53:02.097823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:44.441 [2024-07-24 22:53:02.099033] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:44.441 [2024-07-24 22:53:02.099052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:44.441 Passthru0 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:44.441 { 00:04:44.441 "name": "Malloc2", 00:04:44.441 "aliases": [ 00:04:44.441 "eb9cd2cb-a913-4aaa-9e1d-8bb08a794673" 00:04:44.441 ], 00:04:44.441 "product_name": "Malloc disk", 00:04:44.441 "block_size": 512, 00:04:44.441 
"num_blocks": 16384, 00:04:44.441 "uuid": "eb9cd2cb-a913-4aaa-9e1d-8bb08a794673", 00:04:44.441 "assigned_rate_limits": { 00:04:44.441 "rw_ios_per_sec": 0, 00:04:44.441 "rw_mbytes_per_sec": 0, 00:04:44.441 "r_mbytes_per_sec": 0, 00:04:44.441 "w_mbytes_per_sec": 0 00:04:44.441 }, 00:04:44.441 "claimed": true, 00:04:44.441 "claim_type": "exclusive_write", 00:04:44.441 "zoned": false, 00:04:44.441 "supported_io_types": { 00:04:44.441 "read": true, 00:04:44.441 "write": true, 00:04:44.441 "unmap": true, 00:04:44.441 "flush": true, 00:04:44.441 "reset": true, 00:04:44.441 "nvme_admin": false, 00:04:44.441 "nvme_io": false, 00:04:44.441 "nvme_io_md": false, 00:04:44.441 "write_zeroes": true, 00:04:44.441 "zcopy": true, 00:04:44.441 "get_zone_info": false, 00:04:44.441 "zone_management": false, 00:04:44.441 "zone_append": false, 00:04:44.441 "compare": false, 00:04:44.441 "compare_and_write": false, 00:04:44.441 "abort": true, 00:04:44.441 "seek_hole": false, 00:04:44.441 "seek_data": false, 00:04:44.441 "copy": true, 00:04:44.441 "nvme_iov_md": false 00:04:44.441 }, 00:04:44.441 "memory_domains": [ 00:04:44.441 { 00:04:44.441 "dma_device_id": "system", 00:04:44.441 "dma_device_type": 1 00:04:44.441 }, 00:04:44.441 { 00:04:44.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.441 "dma_device_type": 2 00:04:44.441 } 00:04:44.441 ], 00:04:44.441 "driver_specific": {} 00:04:44.441 }, 00:04:44.441 { 00:04:44.441 "name": "Passthru0", 00:04:44.441 "aliases": [ 00:04:44.441 "3c0a6780-a2fa-5a7c-917c-1d7bd1f39945" 00:04:44.441 ], 00:04:44.441 "product_name": "passthru", 00:04:44.441 "block_size": 512, 00:04:44.441 "num_blocks": 16384, 00:04:44.441 "uuid": "3c0a6780-a2fa-5a7c-917c-1d7bd1f39945", 00:04:44.441 "assigned_rate_limits": { 00:04:44.441 "rw_ios_per_sec": 0, 00:04:44.441 "rw_mbytes_per_sec": 0, 00:04:44.441 "r_mbytes_per_sec": 0, 00:04:44.441 "w_mbytes_per_sec": 0 00:04:44.441 }, 00:04:44.441 "claimed": false, 00:04:44.441 "zoned": false, 00:04:44.441 
"supported_io_types": { 00:04:44.441 "read": true, 00:04:44.441 "write": true, 00:04:44.441 "unmap": true, 00:04:44.441 "flush": true, 00:04:44.441 "reset": true, 00:04:44.441 "nvme_admin": false, 00:04:44.441 "nvme_io": false, 00:04:44.441 "nvme_io_md": false, 00:04:44.441 "write_zeroes": true, 00:04:44.441 "zcopy": true, 00:04:44.441 "get_zone_info": false, 00:04:44.441 "zone_management": false, 00:04:44.441 "zone_append": false, 00:04:44.441 "compare": false, 00:04:44.441 "compare_and_write": false, 00:04:44.441 "abort": true, 00:04:44.441 "seek_hole": false, 00:04:44.441 "seek_data": false, 00:04:44.441 "copy": true, 00:04:44.441 "nvme_iov_md": false 00:04:44.441 }, 00:04:44.441 "memory_domains": [ 00:04:44.441 { 00:04:44.441 "dma_device_id": "system", 00:04:44.441 "dma_device_type": 1 00:04:44.441 }, 00:04:44.441 { 00:04:44.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.441 "dma_device_type": 2 00:04:44.441 } 00:04:44.441 ], 00:04:44.441 "driver_specific": { 00:04:44.441 "passthru": { 00:04:44.441 "name": "Passthru0", 00:04:44.441 "base_bdev_name": "Malloc2" 00:04:44.441 } 00:04:44.441 } 00:04:44.441 } 00:04:44.441 ]' 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:44.441 22:53:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:44.702 22:53:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:44.702 00:04:44.702 real 0m0.299s 00:04:44.702 user 0m0.189s 00:04:44.702 sys 0m0.039s 00:04:44.702 22:53:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.702 22:53:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.702 ************************************ 00:04:44.702 END TEST rpc_daemon_integrity 00:04:44.702 ************************************ 00:04:44.702 22:53:02 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:44.702 22:53:02 rpc -- rpc/rpc.sh@84 -- # killprocess 618535 00:04:44.702 22:53:02 rpc -- common/autotest_common.sh@950 -- # '[' -z 618535 ']' 00:04:44.702 22:53:02 rpc -- common/autotest_common.sh@954 -- # kill -0 618535 00:04:44.702 22:53:02 rpc -- common/autotest_common.sh@955 -- # uname 00:04:44.702 22:53:02 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:44.702 22:53:02 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 618535 00:04:44.702 22:53:02 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:44.702 22:53:02 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:44.702 22:53:02 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 618535' 
00:04:44.702 killing process with pid 618535 00:04:44.702 22:53:02 rpc -- common/autotest_common.sh@969 -- # kill 618535 00:04:44.702 22:53:02 rpc -- common/autotest_common.sh@974 -- # wait 618535 00:04:44.964 00:04:44.964 real 0m2.449s 00:04:44.964 user 0m3.210s 00:04:44.964 sys 0m0.700s 00:04:44.964 22:53:02 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.964 22:53:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.964 ************************************ 00:04:44.964 END TEST rpc 00:04:44.964 ************************************ 00:04:44.964 22:53:02 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:44.964 22:53:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.964 22:53:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.964 22:53:02 -- common/autotest_common.sh@10 -- # set +x 00:04:44.964 ************************************ 00:04:44.964 START TEST skip_rpc 00:04:44.964 ************************************ 00:04:44.964 22:53:02 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:44.964 * Looking for test storage... 
00:04:44.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:44.964 22:53:02 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:44.964 22:53:02 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:44.964 22:53:02 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:44.964 22:53:02 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.964 22:53:02 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.964 22:53:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.225 ************************************ 00:04:45.225 START TEST skip_rpc 00:04:45.225 ************************************ 00:04:45.225 22:53:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:45.225 22:53:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=619144 00:04:45.225 22:53:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:45.225 22:53:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:45.225 22:53:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:45.225 [2024-07-24 22:53:02.827864] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:04:45.225 [2024-07-24 22:53:02.827936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid619144 ] 00:04:45.225 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.225 [2024-07-24 22:53:02.901057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.225 [2024-07-24 22:53:02.976333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.510 22:53:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:50.510 22:53:07 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:50.510 22:53:07 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:50.510 22:53:07 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:50.510 22:53:07 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:50.510 22:53:07 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:50.510 22:53:07 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:50.510 22:53:07 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:50.510 22:53:07 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.510 22:53:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.510 22:53:07 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:50.510 22:53:07 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:50.510 22:53:07 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:50.510 22:53:07 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:50.510 22:53:07 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es 
== 0 )) 00:04:50.510 22:53:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:50.510 22:53:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 619144 00:04:50.510 22:53:07 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 619144 ']' 00:04:50.510 22:53:07 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 619144 00:04:50.510 22:53:07 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:50.510 22:53:07 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:50.510 22:53:07 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 619144 00:04:50.510 22:53:07 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:50.510 22:53:07 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:50.510 22:53:07 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 619144' 00:04:50.510 killing process with pid 619144 00:04:50.510 22:53:07 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 619144 00:04:50.510 22:53:07 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 619144 00:04:50.510 00:04:50.510 real 0m5.287s 00:04:50.510 user 0m5.066s 00:04:50.510 sys 0m0.252s 00:04:50.510 22:53:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.510 22:53:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.510 ************************************ 00:04:50.510 END TEST skip_rpc 00:04:50.510 ************************************ 00:04:50.510 22:53:08 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:50.510 22:53:08 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.510 22:53:08 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.510 22:53:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.510 
************************************ 00:04:50.510 START TEST skip_rpc_with_json 00:04:50.510 ************************************ 00:04:50.510 22:53:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:50.510 22:53:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:50.510 22:53:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=620355 00:04:50.510 22:53:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.510 22:53:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 620355 00:04:50.510 22:53:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:50.510 22:53:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 620355 ']' 00:04:50.510 22:53:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.510 22:53:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:50.510 22:53:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.510 22:53:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:50.510 22:53:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:50.510 [2024-07-24 22:53:08.181668] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:04:50.510 [2024-07-24 22:53:08.181715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid620355 ] 00:04:50.510 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.510 [2024-07-24 22:53:08.246160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.770 [2024-07-24 22:53:08.311132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.342 22:53:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:51.342 22:53:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:51.342 22:53:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:51.342 22:53:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.342 22:53:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.342 [2024-07-24 22:53:08.942319] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:51.342 request: 00:04:51.342 { 00:04:51.342 "trtype": "tcp", 00:04:51.342 "method": "nvmf_get_transports", 00:04:51.342 "req_id": 1 00:04:51.342 } 00:04:51.342 Got JSON-RPC error response 00:04:51.342 response: 00:04:51.342 { 00:04:51.342 "code": -19, 00:04:51.342 "message": "No such device" 00:04:51.342 } 00:04:51.342 22:53:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:51.342 22:53:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:51.342 22:53:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.342 22:53:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.342 [2024-07-24 22:53:08.954448] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:51.342 22:53:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.342 22:53:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:51.342 22:53:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.342 22:53:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.342 22:53:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.342 22:53:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:51.342 { 00:04:51.342 "subsystems": [ 00:04:51.342 { 00:04:51.342 "subsystem": "vfio_user_target", 00:04:51.342 "config": null 00:04:51.342 }, 00:04:51.342 { 00:04:51.342 "subsystem": "keyring", 00:04:51.342 "config": [] 00:04:51.342 }, 00:04:51.342 { 00:04:51.342 "subsystem": "iobuf", 00:04:51.342 "config": [ 00:04:51.342 { 00:04:51.342 "method": "iobuf_set_options", 00:04:51.342 "params": { 00:04:51.342 "small_pool_count": 8192, 00:04:51.342 "large_pool_count": 1024, 00:04:51.342 "small_bufsize": 8192, 00:04:51.342 "large_bufsize": 135168 00:04:51.342 } 00:04:51.342 } 00:04:51.342 ] 00:04:51.342 }, 00:04:51.342 { 00:04:51.342 "subsystem": "sock", 00:04:51.342 "config": [ 00:04:51.342 { 00:04:51.342 "method": "sock_set_default_impl", 00:04:51.342 "params": { 00:04:51.342 "impl_name": "posix" 00:04:51.342 } 00:04:51.342 }, 00:04:51.342 { 00:04:51.342 "method": "sock_impl_set_options", 00:04:51.342 "params": { 00:04:51.342 "impl_name": "ssl", 00:04:51.342 "recv_buf_size": 4096, 00:04:51.342 "send_buf_size": 4096, 00:04:51.342 "enable_recv_pipe": true, 00:04:51.342 "enable_quickack": false, 00:04:51.342 "enable_placement_id": 0, 00:04:51.342 "enable_zerocopy_send_server": true, 00:04:51.342 "enable_zerocopy_send_client": false, 00:04:51.342 "zerocopy_threshold": 0, 
00:04:51.342 "tls_version": 0, 00:04:51.342 "enable_ktls": false 00:04:51.342 } 00:04:51.342 }, 00:04:51.342 { 00:04:51.342 "method": "sock_impl_set_options", 00:04:51.342 "params": { 00:04:51.342 "impl_name": "posix", 00:04:51.342 "recv_buf_size": 2097152, 00:04:51.342 "send_buf_size": 2097152, 00:04:51.342 "enable_recv_pipe": true, 00:04:51.342 "enable_quickack": false, 00:04:51.342 "enable_placement_id": 0, 00:04:51.342 "enable_zerocopy_send_server": true, 00:04:51.342 "enable_zerocopy_send_client": false, 00:04:51.342 "zerocopy_threshold": 0, 00:04:51.342 "tls_version": 0, 00:04:51.342 "enable_ktls": false 00:04:51.342 } 00:04:51.342 } 00:04:51.342 ] 00:04:51.342 }, 00:04:51.342 { 00:04:51.342 "subsystem": "vmd", 00:04:51.342 "config": [] 00:04:51.342 }, 00:04:51.342 { 00:04:51.342 "subsystem": "accel", 00:04:51.342 "config": [ 00:04:51.342 { 00:04:51.342 "method": "accel_set_options", 00:04:51.342 "params": { 00:04:51.342 "small_cache_size": 128, 00:04:51.342 "large_cache_size": 16, 00:04:51.342 "task_count": 2048, 00:04:51.342 "sequence_count": 2048, 00:04:51.342 "buf_count": 2048 00:04:51.342 } 00:04:51.342 } 00:04:51.342 ] 00:04:51.342 }, 00:04:51.342 { 00:04:51.342 "subsystem": "bdev", 00:04:51.342 "config": [ 00:04:51.342 { 00:04:51.342 "method": "bdev_set_options", 00:04:51.342 "params": { 00:04:51.342 "bdev_io_pool_size": 65535, 00:04:51.342 "bdev_io_cache_size": 256, 00:04:51.342 "bdev_auto_examine": true, 00:04:51.342 "iobuf_small_cache_size": 128, 00:04:51.342 "iobuf_large_cache_size": 16 00:04:51.342 } 00:04:51.342 }, 00:04:51.342 { 00:04:51.342 "method": "bdev_raid_set_options", 00:04:51.342 "params": { 00:04:51.342 "process_window_size_kb": 1024, 00:04:51.342 "process_max_bandwidth_mb_sec": 0 00:04:51.342 } 00:04:51.342 }, 00:04:51.342 { 00:04:51.342 "method": "bdev_iscsi_set_options", 00:04:51.342 "params": { 00:04:51.342 "timeout_sec": 30 00:04:51.342 } 00:04:51.342 }, 00:04:51.342 { 00:04:51.342 "method": "bdev_nvme_set_options", 00:04:51.342 
"params": { 00:04:51.342 "action_on_timeout": "none", 00:04:51.342 "timeout_us": 0, 00:04:51.342 "timeout_admin_us": 0, 00:04:51.342 "keep_alive_timeout_ms": 10000, 00:04:51.342 "arbitration_burst": 0, 00:04:51.342 "low_priority_weight": 0, 00:04:51.342 "medium_priority_weight": 0, 00:04:51.342 "high_priority_weight": 0, 00:04:51.342 "nvme_adminq_poll_period_us": 10000, 00:04:51.342 "nvme_ioq_poll_period_us": 0, 00:04:51.342 "io_queue_requests": 0, 00:04:51.342 "delay_cmd_submit": true, 00:04:51.342 "transport_retry_count": 4, 00:04:51.342 "bdev_retry_count": 3, 00:04:51.342 "transport_ack_timeout": 0, 00:04:51.342 "ctrlr_loss_timeout_sec": 0, 00:04:51.342 "reconnect_delay_sec": 0, 00:04:51.342 "fast_io_fail_timeout_sec": 0, 00:04:51.342 "disable_auto_failback": false, 00:04:51.342 "generate_uuids": false, 00:04:51.342 "transport_tos": 0, 00:04:51.342 "nvme_error_stat": false, 00:04:51.342 "rdma_srq_size": 0, 00:04:51.342 "io_path_stat": false, 00:04:51.342 "allow_accel_sequence": false, 00:04:51.342 "rdma_max_cq_size": 0, 00:04:51.342 "rdma_cm_event_timeout_ms": 0, 00:04:51.342 "dhchap_digests": [ 00:04:51.342 "sha256", 00:04:51.342 "sha384", 00:04:51.342 "sha512" 00:04:51.342 ], 00:04:51.342 "dhchap_dhgroups": [ 00:04:51.342 "null", 00:04:51.342 "ffdhe2048", 00:04:51.342 "ffdhe3072", 00:04:51.342 "ffdhe4096", 00:04:51.342 "ffdhe6144", 00:04:51.342 "ffdhe8192" 00:04:51.342 ] 00:04:51.342 } 00:04:51.342 }, 00:04:51.342 { 00:04:51.342 "method": "bdev_nvme_set_hotplug", 00:04:51.342 "params": { 00:04:51.342 "period_us": 100000, 00:04:51.342 "enable": false 00:04:51.342 } 00:04:51.342 }, 00:04:51.342 { 00:04:51.342 "method": "bdev_wait_for_examine" 00:04:51.342 } 00:04:51.342 ] 00:04:51.342 }, 00:04:51.342 { 00:04:51.342 "subsystem": "scsi", 00:04:51.342 "config": null 00:04:51.342 }, 00:04:51.342 { 00:04:51.342 "subsystem": "scheduler", 00:04:51.342 "config": [ 00:04:51.342 { 00:04:51.342 "method": "framework_set_scheduler", 00:04:51.342 "params": { 00:04:51.342 
"name": "static" 00:04:51.342 } 00:04:51.342 } 00:04:51.342 ] 00:04:51.342 }, 00:04:51.342 { 00:04:51.342 "subsystem": "vhost_scsi", 00:04:51.342 "config": [] 00:04:51.342 }, 00:04:51.342 { 00:04:51.342 "subsystem": "vhost_blk", 00:04:51.342 "config": [] 00:04:51.342 }, 00:04:51.342 { 00:04:51.342 "subsystem": "ublk", 00:04:51.342 "config": [] 00:04:51.342 }, 00:04:51.342 { 00:04:51.342 "subsystem": "nbd", 00:04:51.342 "config": [] 00:04:51.342 }, 00:04:51.342 { 00:04:51.342 "subsystem": "nvmf", 00:04:51.342 "config": [ 00:04:51.342 { 00:04:51.342 "method": "nvmf_set_config", 00:04:51.342 "params": { 00:04:51.342 "discovery_filter": "match_any", 00:04:51.342 "admin_cmd_passthru": { 00:04:51.342 "identify_ctrlr": false 00:04:51.342 } 00:04:51.342 } 00:04:51.342 }, 00:04:51.342 { 00:04:51.342 "method": "nvmf_set_max_subsystems", 00:04:51.342 "params": { 00:04:51.342 "max_subsystems": 1024 00:04:51.342 } 00:04:51.342 }, 00:04:51.342 { 00:04:51.342 "method": "nvmf_set_crdt", 00:04:51.343 "params": { 00:04:51.343 "crdt1": 0, 00:04:51.343 "crdt2": 0, 00:04:51.343 "crdt3": 0 00:04:51.343 } 00:04:51.343 }, 00:04:51.343 { 00:04:51.343 "method": "nvmf_create_transport", 00:04:51.343 "params": { 00:04:51.343 "trtype": "TCP", 00:04:51.343 "max_queue_depth": 128, 00:04:51.343 "max_io_qpairs_per_ctrlr": 127, 00:04:51.343 "in_capsule_data_size": 4096, 00:04:51.343 "max_io_size": 131072, 00:04:51.343 "io_unit_size": 131072, 00:04:51.343 "max_aq_depth": 128, 00:04:51.343 "num_shared_buffers": 511, 00:04:51.343 "buf_cache_size": 4294967295, 00:04:51.343 "dif_insert_or_strip": false, 00:04:51.343 "zcopy": false, 00:04:51.343 "c2h_success": true, 00:04:51.343 "sock_priority": 0, 00:04:51.343 "abort_timeout_sec": 1, 00:04:51.343 "ack_timeout": 0, 00:04:51.343 "data_wr_pool_size": 0 00:04:51.343 } 00:04:51.343 } 00:04:51.343 ] 00:04:51.343 }, 00:04:51.343 { 00:04:51.343 "subsystem": "iscsi", 00:04:51.343 "config": [ 00:04:51.343 { 00:04:51.343 "method": "iscsi_set_options", 00:04:51.343 
"params": { 00:04:51.343 "node_base": "iqn.2016-06.io.spdk", 00:04:51.343 "max_sessions": 128, 00:04:51.343 "max_connections_per_session": 2, 00:04:51.343 "max_queue_depth": 64, 00:04:51.343 "default_time2wait": 2, 00:04:51.343 "default_time2retain": 20, 00:04:51.343 "first_burst_length": 8192, 00:04:51.343 "immediate_data": true, 00:04:51.343 "allow_duplicated_isid": false, 00:04:51.343 "error_recovery_level": 0, 00:04:51.343 "nop_timeout": 60, 00:04:51.343 "nop_in_interval": 30, 00:04:51.343 "disable_chap": false, 00:04:51.343 "require_chap": false, 00:04:51.343 "mutual_chap": false, 00:04:51.343 "chap_group": 0, 00:04:51.343 "max_large_datain_per_connection": 64, 00:04:51.343 "max_r2t_per_connection": 4, 00:04:51.343 "pdu_pool_size": 36864, 00:04:51.343 "immediate_data_pool_size": 16384, 00:04:51.343 "data_out_pool_size": 2048 00:04:51.343 } 00:04:51.343 } 00:04:51.343 ] 00:04:51.343 } 00:04:51.343 ] 00:04:51.343 } 00:04:51.343 22:53:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:51.343 22:53:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 620355 00:04:51.343 22:53:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 620355 ']' 00:04:51.343 22:53:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 620355 00:04:51.343 22:53:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:51.343 22:53:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:51.603 22:53:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 620355 00:04:51.603 22:53:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:51.603 22:53:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:51.603 22:53:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process 
with pid 620355' 00:04:51.603 killing process with pid 620355 00:04:51.603 22:53:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 620355 00:04:51.603 22:53:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 620355 00:04:51.864 22:53:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=620486 00:04:51.864 22:53:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:51.864 22:53:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:57.192 22:53:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 620486 00:04:57.192 22:53:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 620486 ']' 00:04:57.192 22:53:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 620486 00:04:57.192 22:53:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:57.192 22:53:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:57.192 22:53:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 620486 00:04:57.192 22:53:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:57.192 22:53:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:57.192 22:53:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 620486' 00:04:57.192 killing process with pid 620486 00:04:57.192 22:53:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 620486 00:04:57.192 22:53:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 620486 00:04:57.192 22:53:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep 
-q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:57.192 22:53:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:57.192 00:04:57.192 real 0m6.545s 00:04:57.192 user 0m6.418s 00:04:57.192 sys 0m0.523s 00:04:57.192 22:53:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:57.192 22:53:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:57.192 ************************************ 00:04:57.192 END TEST skip_rpc_with_json 00:04:57.192 ************************************ 00:04:57.192 22:53:14 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:57.192 22:53:14 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:57.192 22:53:14 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.192 22:53:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.192 ************************************ 00:04:57.192 START TEST skip_rpc_with_delay 00:04:57.192 ************************************ 00:04:57.192 22:53:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:57.192 22:53:14 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.192 22:53:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:57.192 22:53:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.192 22:53:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.192 22:53:14 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:57.192 22:53:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.192 22:53:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:57.192 22:53:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.192 22:53:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:57.192 22:53:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.192 22:53:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:57.192 22:53:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.192 [2024-07-24 22:53:14.805964] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:57.192 [2024-07-24 22:53:14.806054] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:57.192 22:53:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:57.192 22:53:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:57.192 22:53:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:57.192 22:53:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:57.192 00:04:57.192 real 0m0.074s 00:04:57.192 user 0m0.045s 00:04:57.192 sys 0m0.028s 00:04:57.192 22:53:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:57.192 22:53:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:57.192 ************************************ 00:04:57.192 END TEST skip_rpc_with_delay 00:04:57.192 ************************************ 00:04:57.192 22:53:14 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:57.192 22:53:14 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:57.192 22:53:14 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:57.192 22:53:14 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:57.192 22:53:14 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.192 22:53:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.192 ************************************ 00:04:57.192 START TEST exit_on_failed_rpc_init 00:04:57.192 ************************************ 00:04:57.192 22:53:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:57.192 22:53:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=621810 00:04:57.192 22:53:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 621810 00:04:57.192 22:53:14 
skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:57.192 22:53:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 621810 ']' 00:04:57.192 22:53:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.192 22:53:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:57.192 22:53:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.192 22:53:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:57.192 22:53:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:57.192 [2024-07-24 22:53:14.955789] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:04:57.192 [2024-07-24 22:53:14.955849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid621810 ] 00:04:57.453 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.453 [2024-07-24 22:53:15.026921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.453 [2024-07-24 22:53:15.100895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.024 22:53:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:58.024 22:53:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:58.024 22:53:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.024 22:53:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:58.024 22:53:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:58.024 22:53:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:58.024 22:53:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.024 22:53:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.024 22:53:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.024 22:53:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.024 22:53:15 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.024 22:53:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.024 22:53:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.024 22:53:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:58.024 22:53:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:58.024 [2024-07-24 22:53:15.789105] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:04:58.024 [2024-07-24 22:53:15.789157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid621837 ] 00:04:58.285 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.285 [2024-07-24 22:53:15.869667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.285 [2024-07-24 22:53:15.933249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.285 [2024-07-24 22:53:15.933307] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:58.285 [2024-07-24 22:53:15.933319] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:58.285 [2024-07-24 22:53:15.933326] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:58.285 22:53:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:58.285 22:53:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:58.285 22:53:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:58.285 22:53:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:58.286 22:53:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:58.286 22:53:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:58.286 22:53:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:58.286 22:53:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 621810 00:04:58.286 22:53:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 621810 ']' 00:04:58.286 22:53:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 621810 00:04:58.286 22:53:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:58.286 22:53:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:58.286 22:53:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 621810 00:04:58.286 22:53:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:58.286 22:53:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:58.286 22:53:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 621810' 
00:04:58.286 killing process with pid 621810 00:04:58.286 22:53:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 621810 00:04:58.286 22:53:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 621810 00:04:58.546 00:04:58.546 real 0m1.356s 00:04:58.546 user 0m1.600s 00:04:58.546 sys 0m0.372s 00:04:58.546 22:53:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.546 22:53:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:58.546 ************************************ 00:04:58.546 END TEST exit_on_failed_rpc_init 00:04:58.546 ************************************ 00:04:58.546 22:53:16 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:58.546 00:04:58.546 real 0m13.662s 00:04:58.546 user 0m13.289s 00:04:58.546 sys 0m1.438s 00:04:58.546 22:53:16 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.546 22:53:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.546 ************************************ 00:04:58.546 END TEST skip_rpc 00:04:58.546 ************************************ 00:04:58.546 22:53:16 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:58.808 22:53:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.808 22:53:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.808 22:53:16 -- common/autotest_common.sh@10 -- # set +x 00:04:58.808 ************************************ 00:04:58.808 START TEST rpc_client 00:04:58.808 ************************************ 00:04:58.808 22:53:16 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:58.808 * Looking for test storage... 
00:04:58.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:58.808 22:53:16 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:58.808 OK 00:04:58.808 22:53:16 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:58.808 00:04:58.808 real 0m0.124s 00:04:58.808 user 0m0.054s 00:04:58.808 sys 0m0.078s 00:04:58.808 22:53:16 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.808 22:53:16 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:58.808 ************************************ 00:04:58.808 END TEST rpc_client 00:04:58.808 ************************************ 00:04:58.808 22:53:16 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:58.808 22:53:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.808 22:53:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.808 22:53:16 -- common/autotest_common.sh@10 -- # set +x 00:04:58.808 ************************************ 00:04:58.808 START TEST json_config 00:04:58.808 ************************************ 00:04:58.808 22:53:16 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:59.070 22:53:16 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:59.070 22:53:16 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:59.070 22:53:16 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:59.070 22:53:16 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:59.070 22:53:16 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:59.070 22:53:16 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:59.070 22:53:16 json_config -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:59.070 22:53:16 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:59.070 22:53:16 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:59.070 22:53:16 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:59.070 22:53:16 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:59.070 22:53:16 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:59.070 22:53:16 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:04:59.070 22:53:16 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:04:59.070 22:53:16 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:59.070 22:53:16 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:59.070 22:53:16 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:59.070 22:53:16 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:59.070 22:53:16 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:59.070 22:53:16 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:59.070 22:53:16 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.070 22:53:16 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.070 22:53:16 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:04:59.070 22:53:16 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.070 22:53:16 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.070 22:53:16 json_config -- paths/export.sh@5 -- # export PATH 00:04:59.070 22:53:16 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.070 22:53:16 json_config -- nvmf/common.sh@47 -- # : 0 00:04:59.070 22:53:16 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:59.070 22:53:16 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:59.070 22:53:16 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:59.070 22:53:16 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:59.070 22:53:16 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:59.070 22:53:16 json_config -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:59.070 22:53:16 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:59.070 22:53:16 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:59.070 22:53:16 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:59.070 22:53:16 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:59.070 22:53:16 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:59.070 22:53:16 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:59.070 22:53:16 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:59.070 22:53:16 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:59.070 22:53:16 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:59.070 22:53:16 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:59.070 22:53:16 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:59.070 22:53:16 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:59.070 22:53:16 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:59.070 22:53:16 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:59.070 22:53:16 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:59.070 22:53:16 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:59.071 22:53:16 json_config -- 
json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:59.071 22:53:16 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:04:59.071 INFO: JSON configuration test init 00:04:59.071 22:53:16 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:04:59.071 22:53:16 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:04:59.071 22:53:16 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:59.071 22:53:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.071 22:53:16 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:04:59.071 22:53:16 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:59.071 22:53:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.071 22:53:16 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:04:59.071 22:53:16 json_config -- json_config/common.sh@9 -- # local app=target 00:04:59.071 22:53:16 json_config -- json_config/common.sh@10 -- # shift 00:04:59.071 22:53:16 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:59.071 22:53:16 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:59.071 22:53:16 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:59.071 22:53:16 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:59.071 22:53:16 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:59.071 22:53:16 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=622274 00:04:59.071 22:53:16 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:59.071 Waiting for target to run... 
00:04:59.071 22:53:16 json_config -- json_config/common.sh@25 -- # waitforlisten 622274 /var/tmp/spdk_tgt.sock 00:04:59.071 22:53:16 json_config -- common/autotest_common.sh@831 -- # '[' -z 622274 ']' 00:04:59.071 22:53:16 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:59.071 22:53:16 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:59.071 22:53:16 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:59.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:59.071 22:53:16 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:59.071 22:53:16 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:59.071 22:53:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.071 [2024-07-24 22:53:16.745208] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:04:59.071 [2024-07-24 22:53:16.745281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid622274 ] 00:04:59.071 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.332 [2024-07-24 22:53:17.062968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.592 [2024-07-24 22:53:17.120640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.853 22:53:17 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:59.853 22:53:17 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:59.853 22:53:17 json_config -- json_config/common.sh@26 -- # echo '' 00:04:59.853 00:04:59.853 22:53:17 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:04:59.853 22:53:17 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:04:59.853 22:53:17 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:59.853 22:53:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.853 22:53:17 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:04:59.853 22:53:17 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:04:59.853 22:53:17 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:59.853 22:53:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.853 22:53:17 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:59.853 22:53:17 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:04:59.853 22:53:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:00.425 
22:53:18 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:05:00.425 22:53:18 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:00.425 22:53:18 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:00.425 22:53:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.425 22:53:18 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:00.425 22:53:18 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:00.425 22:53:18 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:00.425 22:53:18 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:00.425 22:53:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:00.425 22:53:18 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:00.687 22:53:18 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:00.687 22:53:18 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:00.687 22:53:18 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:00.687 22:53:18 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:00.687 22:53:18 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:00.687 22:53:18 json_config -- json_config/json_config.sh@51 -- # sort 00:05:00.687 22:53:18 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:00.687 22:53:18 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:00.687 22:53:18 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:00.687 22:53:18 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:00.687 22:53:18 
json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:00.687 22:53:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.687 22:53:18 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:00.687 22:53:18 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:00.687 22:53:18 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:00.687 22:53:18 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:05:00.687 22:53:18 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:05:00.687 22:53:18 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:05:00.687 22:53:18 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:05:00.687 22:53:18 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:00.687 22:53:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.687 22:53:18 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:00.687 22:53:18 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:05:00.687 22:53:18 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:05:00.687 22:53:18 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:00.687 22:53:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:00.687 MallocForNvmf0 00:05:00.948 22:53:18 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:00.948 22:53:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:00.948 MallocForNvmf1 00:05:00.948 22:53:18 
json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:00.948 22:53:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:01.209 [2024-07-24 22:53:18.794979] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:01.209 22:53:18 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:01.209 22:53:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:01.209 22:53:18 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:01.209 22:53:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:01.469 22:53:19 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:01.469 22:53:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:01.730 22:53:19 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:01.730 22:53:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:01.730 [2024-07-24 22:53:19.445176] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:01.730 22:53:19 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:05:01.730 22:53:19 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:01.730 22:53:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.730 22:53:19 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:01.730 22:53:19 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:01.730 22:53:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.990 22:53:19 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:01.990 22:53:19 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:01.990 22:53:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:01.990 MallocBdevForConfigChangeCheck 00:05:01.990 22:53:19 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:01.990 22:53:19 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:01.990 22:53:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.990 22:53:19 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:01.990 22:53:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:02.250 22:53:19 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:05:02.250 INFO: shutting down applications... 
00:05:02.250 22:53:19 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:02.250 22:53:19 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:02.250 22:53:19 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:02.250 22:53:19 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:02.819 Calling clear_iscsi_subsystem 00:05:02.819 Calling clear_nvmf_subsystem 00:05:02.819 Calling clear_nbd_subsystem 00:05:02.819 Calling clear_ublk_subsystem 00:05:02.819 Calling clear_vhost_blk_subsystem 00:05:02.819 Calling clear_vhost_scsi_subsystem 00:05:02.819 Calling clear_bdev_subsystem 00:05:02.819 22:53:20 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:02.819 22:53:20 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:02.819 22:53:20 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:05:02.819 22:53:20 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:02.819 22:53:20 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:02.819 22:53:20 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:03.079 22:53:20 json_config -- json_config/json_config.sh@349 -- # break 00:05:03.079 22:53:20 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:03.079 22:53:20 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:03.079 22:53:20 json_config -- 
json_config/common.sh@31 -- # local app=target 00:05:03.079 22:53:20 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:03.079 22:53:20 json_config -- json_config/common.sh@35 -- # [[ -n 622274 ]] 00:05:03.079 22:53:20 json_config -- json_config/common.sh@38 -- # kill -SIGINT 622274 00:05:03.079 22:53:20 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:03.079 22:53:20 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:03.079 22:53:20 json_config -- json_config/common.sh@41 -- # kill -0 622274 00:05:03.079 22:53:20 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:03.651 22:53:21 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:03.651 22:53:21 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:03.651 22:53:21 json_config -- json_config/common.sh@41 -- # kill -0 622274 00:05:03.651 22:53:21 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:03.651 22:53:21 json_config -- json_config/common.sh@43 -- # break 00:05:03.651 22:53:21 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:03.651 22:53:21 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:03.651 SPDK target shutdown done 00:05:03.651 22:53:21 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:05:03.651 INFO: relaunching applications... 
00:05:03.651 22:53:21 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:03.651 22:53:21 json_config -- json_config/common.sh@9 -- # local app=target 00:05:03.651 22:53:21 json_config -- json_config/common.sh@10 -- # shift 00:05:03.651 22:53:21 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:03.651 22:53:21 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:03.651 22:53:21 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:03.651 22:53:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:03.651 22:53:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:03.651 22:53:21 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=623224 00:05:03.651 22:53:21 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:03.651 Waiting for target to run... 00:05:03.651 22:53:21 json_config -- json_config/common.sh@25 -- # waitforlisten 623224 /var/tmp/spdk_tgt.sock 00:05:03.651 22:53:21 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:03.651 22:53:21 json_config -- common/autotest_common.sh@831 -- # '[' -z 623224 ']' 00:05:03.651 22:53:21 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:03.651 22:53:21 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:03.651 22:53:21 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:03.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:03.651 22:53:21 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:03.651 22:53:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.651 [2024-07-24 22:53:21.308213] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:05:03.651 [2024-07-24 22:53:21.308320] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid623224 ] 00:05:03.651 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.930 [2024-07-24 22:53:21.614157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.930 [2024-07-24 22:53:21.664243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.522 [2024-07-24 22:53:22.168821] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:04.522 [2024-07-24 22:53:22.201343] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:04.522 22:53:22 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:04.522 22:53:22 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:04.522 22:53:22 json_config -- json_config/common.sh@26 -- # echo '' 00:05:04.522 00:05:04.522 22:53:22 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:04.522 22:53:22 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:04.522 INFO: Checking if target configuration is the same... 
00:05:04.522 22:53:22 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:04.522 22:53:22 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:04.522 22:53:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:04.522 + '[' 2 -ne 2 ']' 00:05:04.522 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:04.522 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:04.522 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:04.522 +++ basename /dev/fd/62 00:05:04.522 ++ mktemp /tmp/62.XXX 00:05:04.522 + tmp_file_1=/tmp/62.Rsd 00:05:04.522 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:04.522 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:04.522 + tmp_file_2=/tmp/spdk_tgt_config.json.LjU 00:05:04.522 + ret=0 00:05:04.522 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:04.782 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:05.043 + diff -u /tmp/62.Rsd /tmp/spdk_tgt_config.json.LjU 00:05:05.043 + echo 'INFO: JSON config files are the same' 00:05:05.043 INFO: JSON config files are the same 00:05:05.043 + rm /tmp/62.Rsd /tmp/spdk_tgt_config.json.LjU 00:05:05.043 + exit 0 00:05:05.043 22:53:22 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:05.043 22:53:22 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:05.043 INFO: changing configuration and checking if this can be detected... 
00:05:05.043 22:53:22 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:05.043 22:53:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:05.043 22:53:22 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:05.043 22:53:22 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:05.043 22:53:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:05.043 + '[' 2 -ne 2 ']' 00:05:05.043 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:05.043 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:05.043 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:05.043 +++ basename /dev/fd/62 00:05:05.043 ++ mktemp /tmp/62.XXX 00:05:05.043 + tmp_file_1=/tmp/62.0sM 00:05:05.043 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:05.043 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:05.043 + tmp_file_2=/tmp/spdk_tgt_config.json.cvy 00:05:05.043 + ret=0 00:05:05.043 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:05.303 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:05.563 + diff -u /tmp/62.0sM /tmp/spdk_tgt_config.json.cvy 00:05:05.563 + ret=1 00:05:05.563 + echo '=== Start of file: /tmp/62.0sM ===' 00:05:05.563 + cat /tmp/62.0sM 00:05:05.563 + echo '=== End of file: /tmp/62.0sM ===' 00:05:05.563 + echo '' 00:05:05.563 + echo '=== Start of file: /tmp/spdk_tgt_config.json.cvy ===' 00:05:05.563 + cat /tmp/spdk_tgt_config.json.cvy 00:05:05.563 + echo '=== End of file: /tmp/spdk_tgt_config.json.cvy ===' 00:05:05.563 + echo '' 00:05:05.563 + rm /tmp/62.0sM /tmp/spdk_tgt_config.json.cvy 00:05:05.563 + exit 1 00:05:05.563 22:53:23 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:05.563 INFO: configuration change detected. 
00:05:05.563 22:53:23 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:05.563 22:53:23 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:05.563 22:53:23 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:05.563 22:53:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.563 22:53:23 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:05.563 22:53:23 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:05.563 22:53:23 json_config -- json_config/json_config.sh@321 -- # [[ -n 623224 ]] 00:05:05.563 22:53:23 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:05.563 22:53:23 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:05.563 22:53:23 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:05.563 22:53:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.563 22:53:23 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:05.563 22:53:23 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:05.563 22:53:23 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:05.563 22:53:23 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:05.564 22:53:23 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:05:05.564 22:53:23 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:05.564 22:53:23 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:05.564 22:53:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.564 22:53:23 json_config -- json_config/json_config.sh@327 -- # killprocess 623224 00:05:05.564 22:53:23 json_config -- common/autotest_common.sh@950 -- # '[' -z 623224 ']' 00:05:05.564 22:53:23 json_config -- common/autotest_common.sh@954 -- # kill -0 623224 
00:05:05.564 22:53:23 json_config -- common/autotest_common.sh@955 -- # uname 00:05:05.564 22:53:23 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:05.564 22:53:23 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 623224 00:05:05.564 22:53:23 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:05.564 22:53:23 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:05.564 22:53:23 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 623224' 00:05:05.564 killing process with pid 623224 00:05:05.564 22:53:23 json_config -- common/autotest_common.sh@969 -- # kill 623224 00:05:05.564 22:53:23 json_config -- common/autotest_common.sh@974 -- # wait 623224 00:05:05.822 22:53:23 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:05.822 22:53:23 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:05.822 22:53:23 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:05.822 22:53:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.822 22:53:23 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:05.822 22:53:23 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:05.822 INFO: Success 00:05:05.822 00:05:05.822 real 0m7.010s 00:05:05.822 user 0m8.451s 00:05:05.822 sys 0m1.771s 00:05:05.822 22:53:23 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.822 22:53:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.822 ************************************ 00:05:05.822 END TEST json_config 00:05:05.822 ************************************ 00:05:06.082 22:53:23 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:06.082 22:53:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.082 22:53:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.082 22:53:23 -- common/autotest_common.sh@10 -- # set +x 00:05:06.082 ************************************ 00:05:06.082 START TEST json_config_extra_key 00:05:06.082 ************************************ 00:05:06.082 22:53:23 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:06.082 22:53:23 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:06.082 22:53:23 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:06.082 22:53:23 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:06.082 22:53:23 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:06.082 22:53:23 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:06.082 22:53:23 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:06.082 22:53:23 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:06.082 22:53:23 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:06.082 22:53:23 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:06.082 22:53:23 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:06.082 22:53:23 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:06.082 22:53:23 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:06.082 22:53:23 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:05:06.082 22:53:23 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:05:06.082 22:53:23 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:06.082 22:53:23 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:06.082 22:53:23 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:06.082 22:53:23 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:06.082 22:53:23 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:06.082 22:53:23 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:06.082 22:53:23 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:06.082 22:53:23 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:06.082 22:53:23 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.082 22:53:23 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.083 22:53:23 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.083 22:53:23 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:06.083 22:53:23 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.083 22:53:23 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:06.083 22:53:23 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:06.083 22:53:23 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:06.083 22:53:23 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:06.083 22:53:23 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:06.083 22:53:23 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:06.083 22:53:23 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:06.083 22:53:23 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:06.083 22:53:23 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:06.083 22:53:23 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:06.083 22:53:23 
json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:06.083 22:53:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:06.083 22:53:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:06.083 22:53:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:06.083 22:53:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:06.083 22:53:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:06.083 22:53:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:06.083 22:53:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:06.083 22:53:23 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:06.083 22:53:23 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:06.083 INFO: launching applications... 
00:05:06.083 22:53:23 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:06.083 22:53:23 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:06.083 22:53:23 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:06.083 22:53:23 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:06.083 22:53:23 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:06.083 22:53:23 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:06.083 22:53:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:06.083 22:53:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:06.083 22:53:23 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=623864 00:05:06.083 22:53:23 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:06.083 22:53:23 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:06.083 Waiting for target to run... 
00:05:06.083 22:53:23 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 623864 /var/tmp/spdk_tgt.sock 00:05:06.083 22:53:23 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 623864 ']' 00:05:06.083 22:53:23 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:06.083 22:53:23 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:06.083 22:53:23 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:06.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:06.083 22:53:23 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:06.083 22:53:23 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:06.083 [2024-07-24 22:53:23.789828] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:05:06.083 [2024-07-24 22:53:23.789892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid623864 ] 00:05:06.083 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.343 [2024-07-24 22:53:24.025822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.343 [2024-07-24 22:53:24.075314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.913 22:53:24 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:06.913 22:53:24 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:06.913 22:53:24 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:06.913 00:05:06.913 22:53:24 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:06.913 INFO: shutting down applications... 
00:05:06.913 22:53:24 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:06.913 22:53:24 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:06.913 22:53:24 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:06.913 22:53:24 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 623864 ]] 00:05:06.913 22:53:24 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 623864 00:05:06.913 22:53:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:06.913 22:53:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.913 22:53:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 623864 00:05:06.913 22:53:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:07.485 22:53:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:07.485 22:53:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:07.485 22:53:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 623864 00:05:07.485 22:53:25 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:07.485 22:53:25 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:07.485 22:53:25 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:07.485 22:53:25 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:07.485 SPDK target shutdown done 00:05:07.485 22:53:25 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:07.485 Success 00:05:07.485 00:05:07.485 real 0m1.418s 00:05:07.485 user 0m1.109s 00:05:07.485 sys 0m0.309s 00:05:07.485 22:53:25 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:07.485 22:53:25 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:07.485 ************************************ 
00:05:07.485 END TEST json_config_extra_key 00:05:07.485 ************************************ 00:05:07.485 22:53:25 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:07.485 22:53:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:07.485 22:53:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.485 22:53:25 -- common/autotest_common.sh@10 -- # set +x 00:05:07.485 ************************************ 00:05:07.485 START TEST alias_rpc 00:05:07.485 ************************************ 00:05:07.485 22:53:25 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:07.485 * Looking for test storage... 00:05:07.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:07.485 22:53:25 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:07.485 22:53:25 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=624247 00:05:07.486 22:53:25 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 624247 00:05:07.486 22:53:25 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 624247 ']' 00:05:07.486 22:53:25 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.486 22:53:25 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:07.486 22:53:25 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:07.486 22:53:25 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:07.486 22:53:25 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.486 22:53:25 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:07.486 [2024-07-24 22:53:25.268660] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:05:07.486 [2024-07-24 22:53:25.268732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid624247 ] 00:05:07.746 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.746 [2024-07-24 22:53:25.341320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.746 [2024-07-24 22:53:25.416338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.316 22:53:26 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:08.316 22:53:26 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:08.316 22:53:26 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:08.575 22:53:26 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 624247 00:05:08.576 22:53:26 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 624247 ']' 00:05:08.576 22:53:26 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 624247 00:05:08.576 22:53:26 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:08.576 22:53:26 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:08.576 22:53:26 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 624247 00:05:08.576 22:53:26 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:08.576 22:53:26 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 
= sudo ']' 00:05:08.576 22:53:26 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 624247' 00:05:08.576 killing process with pid 624247 00:05:08.576 22:53:26 alias_rpc -- common/autotest_common.sh@969 -- # kill 624247 00:05:08.576 22:53:26 alias_rpc -- common/autotest_common.sh@974 -- # wait 624247 00:05:08.837 00:05:08.837 real 0m1.326s 00:05:08.837 user 0m1.451s 00:05:08.837 sys 0m0.358s 00:05:08.837 22:53:26 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.837 22:53:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.837 ************************************ 00:05:08.837 END TEST alias_rpc 00:05:08.837 ************************************ 00:05:08.837 22:53:26 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:08.837 22:53:26 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:08.837 22:53:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:08.837 22:53:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.837 22:53:26 -- common/autotest_common.sh@10 -- # set +x 00:05:08.837 ************************************ 00:05:08.837 START TEST spdkcli_tcp 00:05:08.837 ************************************ 00:05:08.837 22:53:26 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:08.837 * Looking for test storage... 
00:05:09.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:09.098 22:53:26 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:09.098 22:53:26 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:09.098 22:53:26 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:09.098 22:53:26 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:09.098 22:53:26 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:09.098 22:53:26 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:09.098 22:53:26 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:09.098 22:53:26 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:09.098 22:53:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:09.098 22:53:26 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=624636 00:05:09.098 22:53:26 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 624636 00:05:09.098 22:53:26 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:09.098 22:53:26 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 624636 ']' 00:05:09.098 22:53:26 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.098 22:53:26 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:09.098 22:53:26 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:09.098 22:53:26 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:09.098 22:53:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:09.098 [2024-07-24 22:53:26.699785] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:05:09.098 [2024-07-24 22:53:26.699859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid624636 ] 00:05:09.098 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.098 [2024-07-24 22:53:26.773411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:09.098 [2024-07-24 22:53:26.848443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.098 [2024-07-24 22:53:26.848445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.041 22:53:27 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:10.041 22:53:27 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:10.041 22:53:27 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:10.041 22:53:27 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=624648 00:05:10.041 22:53:27 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:10.041 [ 00:05:10.041 "bdev_malloc_delete", 00:05:10.041 "bdev_malloc_create", 00:05:10.041 "bdev_null_resize", 00:05:10.041 "bdev_null_delete", 00:05:10.041 "bdev_null_create", 00:05:10.041 "bdev_nvme_cuse_unregister", 00:05:10.041 "bdev_nvme_cuse_register", 00:05:10.041 "bdev_opal_new_user", 00:05:10.041 "bdev_opal_set_lock_state", 00:05:10.041 "bdev_opal_delete", 00:05:10.041 "bdev_opal_get_info", 00:05:10.041 "bdev_opal_create", 00:05:10.041 "bdev_nvme_opal_revert", 00:05:10.042 
"bdev_nvme_opal_init", 00:05:10.042 "bdev_nvme_send_cmd", 00:05:10.042 "bdev_nvme_get_path_iostat", 00:05:10.042 "bdev_nvme_get_mdns_discovery_info", 00:05:10.042 "bdev_nvme_stop_mdns_discovery", 00:05:10.042 "bdev_nvme_start_mdns_discovery", 00:05:10.042 "bdev_nvme_set_multipath_policy", 00:05:10.042 "bdev_nvme_set_preferred_path", 00:05:10.042 "bdev_nvme_get_io_paths", 00:05:10.042 "bdev_nvme_remove_error_injection", 00:05:10.042 "bdev_nvme_add_error_injection", 00:05:10.042 "bdev_nvme_get_discovery_info", 00:05:10.042 "bdev_nvme_stop_discovery", 00:05:10.042 "bdev_nvme_start_discovery", 00:05:10.042 "bdev_nvme_get_controller_health_info", 00:05:10.042 "bdev_nvme_disable_controller", 00:05:10.042 "bdev_nvme_enable_controller", 00:05:10.042 "bdev_nvme_reset_controller", 00:05:10.042 "bdev_nvme_get_transport_statistics", 00:05:10.042 "bdev_nvme_apply_firmware", 00:05:10.042 "bdev_nvme_detach_controller", 00:05:10.042 "bdev_nvme_get_controllers", 00:05:10.042 "bdev_nvme_attach_controller", 00:05:10.042 "bdev_nvme_set_hotplug", 00:05:10.042 "bdev_nvme_set_options", 00:05:10.042 "bdev_passthru_delete", 00:05:10.042 "bdev_passthru_create", 00:05:10.042 "bdev_lvol_set_parent_bdev", 00:05:10.042 "bdev_lvol_set_parent", 00:05:10.042 "bdev_lvol_check_shallow_copy", 00:05:10.042 "bdev_lvol_start_shallow_copy", 00:05:10.042 "bdev_lvol_grow_lvstore", 00:05:10.042 "bdev_lvol_get_lvols", 00:05:10.042 "bdev_lvol_get_lvstores", 00:05:10.042 "bdev_lvol_delete", 00:05:10.042 "bdev_lvol_set_read_only", 00:05:10.042 "bdev_lvol_resize", 00:05:10.042 "bdev_lvol_decouple_parent", 00:05:10.042 "bdev_lvol_inflate", 00:05:10.042 "bdev_lvol_rename", 00:05:10.042 "bdev_lvol_clone_bdev", 00:05:10.042 "bdev_lvol_clone", 00:05:10.042 "bdev_lvol_snapshot", 00:05:10.042 "bdev_lvol_create", 00:05:10.042 "bdev_lvol_delete_lvstore", 00:05:10.042 "bdev_lvol_rename_lvstore", 00:05:10.042 "bdev_lvol_create_lvstore", 00:05:10.042 "bdev_raid_set_options", 00:05:10.042 "bdev_raid_remove_base_bdev", 
00:05:10.042 "bdev_raid_add_base_bdev", 00:05:10.042 "bdev_raid_delete", 00:05:10.042 "bdev_raid_create", 00:05:10.042 "bdev_raid_get_bdevs", 00:05:10.042 "bdev_error_inject_error", 00:05:10.042 "bdev_error_delete", 00:05:10.042 "bdev_error_create", 00:05:10.042 "bdev_split_delete", 00:05:10.042 "bdev_split_create", 00:05:10.042 "bdev_delay_delete", 00:05:10.042 "bdev_delay_create", 00:05:10.042 "bdev_delay_update_latency", 00:05:10.042 "bdev_zone_block_delete", 00:05:10.042 "bdev_zone_block_create", 00:05:10.042 "blobfs_create", 00:05:10.042 "blobfs_detect", 00:05:10.042 "blobfs_set_cache_size", 00:05:10.042 "bdev_aio_delete", 00:05:10.042 "bdev_aio_rescan", 00:05:10.042 "bdev_aio_create", 00:05:10.042 "bdev_ftl_set_property", 00:05:10.042 "bdev_ftl_get_properties", 00:05:10.042 "bdev_ftl_get_stats", 00:05:10.042 "bdev_ftl_unmap", 00:05:10.042 "bdev_ftl_unload", 00:05:10.042 "bdev_ftl_delete", 00:05:10.042 "bdev_ftl_load", 00:05:10.042 "bdev_ftl_create", 00:05:10.042 "bdev_virtio_attach_controller", 00:05:10.042 "bdev_virtio_scsi_get_devices", 00:05:10.042 "bdev_virtio_detach_controller", 00:05:10.042 "bdev_virtio_blk_set_hotplug", 00:05:10.042 "bdev_iscsi_delete", 00:05:10.042 "bdev_iscsi_create", 00:05:10.042 "bdev_iscsi_set_options", 00:05:10.042 "accel_error_inject_error", 00:05:10.042 "ioat_scan_accel_module", 00:05:10.042 "dsa_scan_accel_module", 00:05:10.042 "iaa_scan_accel_module", 00:05:10.042 "vfu_virtio_create_scsi_endpoint", 00:05:10.042 "vfu_virtio_scsi_remove_target", 00:05:10.042 "vfu_virtio_scsi_add_target", 00:05:10.042 "vfu_virtio_create_blk_endpoint", 00:05:10.042 "vfu_virtio_delete_endpoint", 00:05:10.042 "keyring_file_remove_key", 00:05:10.042 "keyring_file_add_key", 00:05:10.042 "keyring_linux_set_options", 00:05:10.042 "iscsi_get_histogram", 00:05:10.042 "iscsi_enable_histogram", 00:05:10.042 "iscsi_set_options", 00:05:10.042 "iscsi_get_auth_groups", 00:05:10.042 "iscsi_auth_group_remove_secret", 00:05:10.042 "iscsi_auth_group_add_secret", 
00:05:10.042 "iscsi_delete_auth_group", 00:05:10.042 "iscsi_create_auth_group", 00:05:10.042 "iscsi_set_discovery_auth", 00:05:10.042 "iscsi_get_options", 00:05:10.042 "iscsi_target_node_request_logout", 00:05:10.042 "iscsi_target_node_set_redirect", 00:05:10.042 "iscsi_target_node_set_auth", 00:05:10.042 "iscsi_target_node_add_lun", 00:05:10.042 "iscsi_get_stats", 00:05:10.042 "iscsi_get_connections", 00:05:10.042 "iscsi_portal_group_set_auth", 00:05:10.042 "iscsi_start_portal_group", 00:05:10.042 "iscsi_delete_portal_group", 00:05:10.042 "iscsi_create_portal_group", 00:05:10.042 "iscsi_get_portal_groups", 00:05:10.042 "iscsi_delete_target_node", 00:05:10.042 "iscsi_target_node_remove_pg_ig_maps", 00:05:10.042 "iscsi_target_node_add_pg_ig_maps", 00:05:10.042 "iscsi_create_target_node", 00:05:10.042 "iscsi_get_target_nodes", 00:05:10.042 "iscsi_delete_initiator_group", 00:05:10.042 "iscsi_initiator_group_remove_initiators", 00:05:10.042 "iscsi_initiator_group_add_initiators", 00:05:10.042 "iscsi_create_initiator_group", 00:05:10.042 "iscsi_get_initiator_groups", 00:05:10.042 "nvmf_set_crdt", 00:05:10.042 "nvmf_set_config", 00:05:10.042 "nvmf_set_max_subsystems", 00:05:10.042 "nvmf_stop_mdns_prr", 00:05:10.042 "nvmf_publish_mdns_prr", 00:05:10.042 "nvmf_subsystem_get_listeners", 00:05:10.042 "nvmf_subsystem_get_qpairs", 00:05:10.042 "nvmf_subsystem_get_controllers", 00:05:10.042 "nvmf_get_stats", 00:05:10.042 "nvmf_get_transports", 00:05:10.042 "nvmf_create_transport", 00:05:10.042 "nvmf_get_targets", 00:05:10.042 "nvmf_delete_target", 00:05:10.042 "nvmf_create_target", 00:05:10.042 "nvmf_subsystem_allow_any_host", 00:05:10.042 "nvmf_subsystem_remove_host", 00:05:10.042 "nvmf_subsystem_add_host", 00:05:10.042 "nvmf_ns_remove_host", 00:05:10.042 "nvmf_ns_add_host", 00:05:10.042 "nvmf_subsystem_remove_ns", 00:05:10.042 "nvmf_subsystem_add_ns", 00:05:10.042 "nvmf_subsystem_listener_set_ana_state", 00:05:10.042 "nvmf_discovery_get_referrals", 00:05:10.042 
"nvmf_discovery_remove_referral", 00:05:10.042 "nvmf_discovery_add_referral", 00:05:10.042 "nvmf_subsystem_remove_listener", 00:05:10.042 "nvmf_subsystem_add_listener", 00:05:10.042 "nvmf_delete_subsystem", 00:05:10.042 "nvmf_create_subsystem", 00:05:10.042 "nvmf_get_subsystems", 00:05:10.042 "env_dpdk_get_mem_stats", 00:05:10.042 "nbd_get_disks", 00:05:10.042 "nbd_stop_disk", 00:05:10.042 "nbd_start_disk", 00:05:10.042 "ublk_recover_disk", 00:05:10.042 "ublk_get_disks", 00:05:10.042 "ublk_stop_disk", 00:05:10.042 "ublk_start_disk", 00:05:10.042 "ublk_destroy_target", 00:05:10.042 "ublk_create_target", 00:05:10.042 "virtio_blk_create_transport", 00:05:10.042 "virtio_blk_get_transports", 00:05:10.042 "vhost_controller_set_coalescing", 00:05:10.042 "vhost_get_controllers", 00:05:10.042 "vhost_delete_controller", 00:05:10.042 "vhost_create_blk_controller", 00:05:10.042 "vhost_scsi_controller_remove_target", 00:05:10.042 "vhost_scsi_controller_add_target", 00:05:10.042 "vhost_start_scsi_controller", 00:05:10.042 "vhost_create_scsi_controller", 00:05:10.042 "thread_set_cpumask", 00:05:10.042 "framework_get_governor", 00:05:10.042 "framework_get_scheduler", 00:05:10.042 "framework_set_scheduler", 00:05:10.042 "framework_get_reactors", 00:05:10.042 "thread_get_io_channels", 00:05:10.042 "thread_get_pollers", 00:05:10.042 "thread_get_stats", 00:05:10.042 "framework_monitor_context_switch", 00:05:10.042 "spdk_kill_instance", 00:05:10.042 "log_enable_timestamps", 00:05:10.043 "log_get_flags", 00:05:10.043 "log_clear_flag", 00:05:10.043 "log_set_flag", 00:05:10.043 "log_get_level", 00:05:10.043 "log_set_level", 00:05:10.043 "log_get_print_level", 00:05:10.043 "log_set_print_level", 00:05:10.043 "framework_enable_cpumask_locks", 00:05:10.043 "framework_disable_cpumask_locks", 00:05:10.043 "framework_wait_init", 00:05:10.043 "framework_start_init", 00:05:10.043 "scsi_get_devices", 00:05:10.043 "bdev_get_histogram", 00:05:10.043 "bdev_enable_histogram", 00:05:10.043 
"bdev_set_qos_limit", 00:05:10.043 "bdev_set_qd_sampling_period", 00:05:10.043 "bdev_get_bdevs", 00:05:10.043 "bdev_reset_iostat", 00:05:10.043 "bdev_get_iostat", 00:05:10.043 "bdev_examine", 00:05:10.043 "bdev_wait_for_examine", 00:05:10.043 "bdev_set_options", 00:05:10.043 "notify_get_notifications", 00:05:10.043 "notify_get_types", 00:05:10.043 "accel_get_stats", 00:05:10.043 "accel_set_options", 00:05:10.043 "accel_set_driver", 00:05:10.043 "accel_crypto_key_destroy", 00:05:10.043 "accel_crypto_keys_get", 00:05:10.043 "accel_crypto_key_create", 00:05:10.043 "accel_assign_opc", 00:05:10.043 "accel_get_module_info", 00:05:10.043 "accel_get_opc_assignments", 00:05:10.043 "vmd_rescan", 00:05:10.043 "vmd_remove_device", 00:05:10.043 "vmd_enable", 00:05:10.043 "sock_get_default_impl", 00:05:10.043 "sock_set_default_impl", 00:05:10.043 "sock_impl_set_options", 00:05:10.043 "sock_impl_get_options", 00:05:10.043 "iobuf_get_stats", 00:05:10.043 "iobuf_set_options", 00:05:10.043 "keyring_get_keys", 00:05:10.043 "framework_get_pci_devices", 00:05:10.043 "framework_get_config", 00:05:10.043 "framework_get_subsystems", 00:05:10.043 "vfu_tgt_set_base_path", 00:05:10.043 "trace_get_info", 00:05:10.043 "trace_get_tpoint_group_mask", 00:05:10.043 "trace_disable_tpoint_group", 00:05:10.043 "trace_enable_tpoint_group", 00:05:10.043 "trace_clear_tpoint_mask", 00:05:10.043 "trace_set_tpoint_mask", 00:05:10.043 "spdk_get_version", 00:05:10.043 "rpc_get_methods" 00:05:10.043 ] 00:05:10.043 22:53:27 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:10.043 22:53:27 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:10.043 22:53:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:10.043 22:53:27 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:10.043 22:53:27 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 624636 00:05:10.043 22:53:27 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 624636 ']' 
00:05:10.043 22:53:27 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 624636 00:05:10.043 22:53:27 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:10.043 22:53:27 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:10.043 22:53:27 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 624636 00:05:10.043 22:53:27 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:10.043 22:53:27 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:10.043 22:53:27 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 624636' 00:05:10.043 killing process with pid 624636 00:05:10.043 22:53:27 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 624636 00:05:10.043 22:53:27 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 624636 00:05:10.355 00:05:10.355 real 0m1.413s 00:05:10.355 user 0m2.583s 00:05:10.355 sys 0m0.428s 00:05:10.355 22:53:27 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.355 22:53:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:10.355 ************************************ 00:05:10.355 END TEST spdkcli_tcp 00:05:10.355 ************************************ 00:05:10.355 22:53:27 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:10.355 22:53:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.355 22:53:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.355 22:53:27 -- common/autotest_common.sh@10 -- # set +x 00:05:10.355 ************************************ 00:05:10.355 START TEST dpdk_mem_utility 00:05:10.355 ************************************ 00:05:10.355 22:53:28 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:10.356 * 
Looking for test storage... 00:05:10.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:10.356 22:53:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:10.356 22:53:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=624943 00:05:10.356 22:53:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 624943 00:05:10.356 22:53:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:10.356 22:53:28 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 624943 ']' 00:05:10.356 22:53:28 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.356 22:53:28 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.356 22:53:28 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.356 22:53:28 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.356 22:53:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:10.616 [2024-07-24 22:53:28.169396] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:05:10.616 [2024-07-24 22:53:28.169473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid624943 ] 00:05:10.616 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.616 [2024-07-24 22:53:28.241625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.616 [2024-07-24 22:53:28.315912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.187 22:53:28 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:11.187 22:53:28 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:11.187 22:53:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:11.187 22:53:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:11.187 22:53:28 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.187 22:53:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:11.187 { 00:05:11.187 "filename": "/tmp/spdk_mem_dump.txt" 00:05:11.187 } 00:05:11.187 22:53:28 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.187 22:53:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:11.448 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:11.448 1 heaps totaling size 814.000000 MiB 00:05:11.448 size: 814.000000 MiB heap id: 0 00:05:11.448 end heaps---------- 00:05:11.448 8 mempools totaling size 598.116089 MiB 00:05:11.448 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:11.448 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:11.448 size: 84.521057 MiB name: bdev_io_624943 00:05:11.448 size: 51.011292 MiB name: evtpool_624943 
00:05:11.448 size: 50.003479 MiB name: msgpool_624943 00:05:11.448 size: 21.763794 MiB name: PDU_Pool 00:05:11.448 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:11.448 size: 0.026123 MiB name: Session_Pool 00:05:11.448 end mempools------- 00:05:11.448 6 memzones totaling size 4.142822 MiB 00:05:11.448 size: 1.000366 MiB name: RG_ring_0_624943 00:05:11.448 size: 1.000366 MiB name: RG_ring_1_624943 00:05:11.448 size: 1.000366 MiB name: RG_ring_4_624943 00:05:11.448 size: 1.000366 MiB name: RG_ring_5_624943 00:05:11.448 size: 0.125366 MiB name: RG_ring_2_624943 00:05:11.448 size: 0.015991 MiB name: RG_ring_3_624943 00:05:11.448 end memzones------- 00:05:11.448 22:53:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:11.448 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:11.448 list of free elements. size: 12.519348 MiB 00:05:11.448 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:11.448 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:11.448 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:11.448 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:11.448 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:11.448 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:11.448 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:11.448 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:11.448 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:11.448 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:11.448 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:11.448 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:11.448 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:11.448 element at address: 0x200027e00000 with size: 0.410034 MiB 
00:05:11.448 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:11.448 list of standard malloc elements. size: 199.218079 MiB 00:05:11.448 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:11.448 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:11.448 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:11.448 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:11.448 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:11.448 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:11.448 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:11.448 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:11.448 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:11.448 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:11.448 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:11.448 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:11.448 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:11.448 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:11.448 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:11.448 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:11.448 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:11.448 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:11.448 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:11.448 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:11.448 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:11.448 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:11.448 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:11.448 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:11.448 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:11.448 element at address: 0x200003eff0c0 with 
size: 0.000183 MiB 00:05:11.448 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:11.448 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:11.448 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:11.448 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:11.448 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:11.448 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:11.448 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:11.448 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:11.448 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:11.448 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:11.448 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:11.448 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:11.448 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:11.448 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:11.448 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:11.449 list of memzone associated elements. 
size: 602.262573 MiB 00:05:11.449 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:11.449 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:11.449 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:11.449 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:11.449 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:11.449 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_624943_0 00:05:11.449 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:11.449 associated memzone info: size: 48.002930 MiB name: MP_evtpool_624943_0 00:05:11.449 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:11.449 associated memzone info: size: 48.002930 MiB name: MP_msgpool_624943_0 00:05:11.449 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:11.449 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:11.449 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:11.449 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:11.449 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:11.449 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_624943 00:05:11.449 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:11.449 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_624943 00:05:11.449 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:11.449 associated memzone info: size: 1.007996 MiB name: MP_evtpool_624943 00:05:11.449 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:11.449 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:11.449 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:11.449 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:11.449 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:11.449 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:11.449 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:11.449 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:11.449 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:11.449 associated memzone info: size: 1.000366 MiB name: RG_ring_0_624943 00:05:11.449 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:11.449 associated memzone info: size: 1.000366 MiB name: RG_ring_1_624943 00:05:11.449 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:11.449 associated memzone info: size: 1.000366 MiB name: RG_ring_4_624943 00:05:11.449 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:11.449 associated memzone info: size: 1.000366 MiB name: RG_ring_5_624943 00:05:11.449 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:11.449 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_624943 00:05:11.449 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:11.449 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:11.449 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:11.449 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:11.449 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:11.449 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:11.449 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:11.449 associated memzone info: size: 0.125366 MiB name: RG_ring_2_624943 00:05:11.449 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:11.449 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:11.449 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:11.449 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:11.449 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:11.449 
associated memzone info: size: 0.015991 MiB name: RG_ring_3_624943 00:05:11.449 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:11.449 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:11.449 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:11.449 associated memzone info: size: 0.000183 MiB name: MP_msgpool_624943 00:05:11.449 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:11.449 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_624943 00:05:11.449 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:11.449 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:11.449 22:53:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:11.449 22:53:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 624943 00:05:11.449 22:53:29 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 624943 ']' 00:05:11.449 22:53:29 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 624943 00:05:11.449 22:53:29 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:11.449 22:53:29 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:11.449 22:53:29 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 624943 00:05:11.449 22:53:29 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:11.449 22:53:29 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:11.449 22:53:29 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 624943' 00:05:11.449 killing process with pid 624943 00:05:11.449 22:53:29 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 624943 00:05:11.449 22:53:29 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 624943 00:05:11.710 00:05:11.710 real 0m1.272s 00:05:11.710 user 0m1.355s 
00:05:11.710 sys 0m0.356s 00:05:11.710 22:53:29 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.710 22:53:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:11.710 ************************************ 00:05:11.710 END TEST dpdk_mem_utility 00:05:11.710 ************************************ 00:05:11.710 22:53:29 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:11.710 22:53:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.710 22:53:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.710 22:53:29 -- common/autotest_common.sh@10 -- # set +x 00:05:11.710 ************************************ 00:05:11.710 START TEST event 00:05:11.710 ************************************ 00:05:11.710 22:53:29 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:11.710 * Looking for test storage... 
00:05:11.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:11.710 22:53:29 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:11.710 22:53:29 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:11.710 22:53:29 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:11.710 22:53:29 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:11.710 22:53:29 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.710 22:53:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:11.710 ************************************ 00:05:11.710 START TEST event_perf 00:05:11.710 ************************************ 00:05:11.710 22:53:29 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:11.971 Running I/O for 1 seconds...[2024-07-24 22:53:29.507910] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:05:11.971 [2024-07-24 22:53:29.508006] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid625169 ] 00:05:11.971 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.971 [2024-07-24 22:53:29.582839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:11.971 [2024-07-24 22:53:29.660789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.971 [2024-07-24 22:53:29.660861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:11.971 [2024-07-24 22:53:29.661025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.971 Running I/O for 1 seconds...[2024-07-24 22:53:29.661025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:13.356 00:05:13.356 lcore 0: 178359 00:05:13.356 lcore 1: 178357 00:05:13.356 lcore 2: 178356 00:05:13.356 lcore 3: 178359 00:05:13.356 done. 
00:05:13.356 00:05:13.356 real 0m1.230s 00:05:13.356 user 0m4.138s 00:05:13.356 sys 0m0.087s 00:05:13.356 22:53:30 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.356 22:53:30 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:13.356 ************************************ 00:05:13.356 END TEST event_perf 00:05:13.356 ************************************ 00:05:13.356 22:53:30 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:13.356 22:53:30 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:13.356 22:53:30 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.356 22:53:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.356 ************************************ 00:05:13.356 START TEST event_reactor 00:05:13.356 ************************************ 00:05:13.356 22:53:30 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:13.356 [2024-07-24 22:53:30.814498] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:05:13.356 [2024-07-24 22:53:30.814591] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid625467 ] 00:05:13.356 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.356 [2024-07-24 22:53:30.886458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.356 [2024-07-24 22:53:30.951078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.297 test_start 00:05:14.297 oneshot 00:05:14.297 tick 100 00:05:14.297 tick 100 00:05:14.297 tick 250 00:05:14.297 tick 100 00:05:14.297 tick 100 00:05:14.297 tick 100 00:05:14.297 tick 250 00:05:14.297 tick 500 00:05:14.297 tick 100 00:05:14.297 tick 100 00:05:14.297 tick 250 00:05:14.297 tick 100 00:05:14.297 tick 100 00:05:14.297 test_end 00:05:14.297 00:05:14.297 real 0m1.211s 00:05:14.297 user 0m1.127s 00:05:14.297 sys 0m0.079s 00:05:14.297 22:53:32 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.297 22:53:32 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:14.297 ************************************ 00:05:14.297 END TEST event_reactor 00:05:14.297 ************************************ 00:05:14.297 22:53:32 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:14.297 22:53:32 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:14.297 22:53:32 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.297 22:53:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:14.297 ************************************ 00:05:14.297 START TEST event_reactor_perf 00:05:14.297 ************************************ 00:05:14.297 22:53:32 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:14.557 [2024-07-24 22:53:32.102128] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:05:14.557 [2024-07-24 22:53:32.102222] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid625824 ] 00:05:14.557 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.557 [2024-07-24 22:53:32.171509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.557 [2024-07-24 22:53:32.235989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.940 test_start 00:05:15.940 test_end 00:05:15.940 Performance: 365883 events per second 00:05:15.940 00:05:15.940 real 0m1.209s 00:05:15.940 user 0m1.134s 00:05:15.940 sys 0m0.071s 00:05:15.940 22:53:33 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.940 22:53:33 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:15.940 ************************************ 00:05:15.940 END TEST event_reactor_perf 00:05:15.940 ************************************ 00:05:15.940 22:53:33 event -- event/event.sh@49 -- # uname -s 00:05:15.940 22:53:33 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:15.940 22:53:33 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:15.940 22:53:33 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.940 22:53:33 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.940 22:53:33 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.940 ************************************ 00:05:15.940 START TEST event_scheduler 00:05:15.940 ************************************ 
00:05:15.940 22:53:33 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:15.940 * Looking for test storage... 00:05:15.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:15.940 22:53:33 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:15.940 22:53:33 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=626200 00:05:15.940 22:53:33 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.940 22:53:33 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 626200 00:05:15.940 22:53:33 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 626200 ']' 00:05:15.940 22:53:33 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.940 22:53:33 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:15.940 22:53:33 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.940 22:53:33 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:15.940 22:53:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:15.940 22:53:33 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:15.940 [2024-07-24 22:53:33.527552] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:05:15.940 [2024-07-24 22:53:33.527619] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid626200 ] 00:05:15.940 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.940 [2024-07-24 22:53:33.587325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:15.940 [2024-07-24 22:53:33.646667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.940 [2024-07-24 22:53:33.646823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.940 [2024-07-24 22:53:33.646870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:15.941 [2024-07-24 22:53:33.646871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:16.511 22:53:34 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:16.511 22:53:34 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:16.511 22:53:34 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:16.511 22:53:34 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.511 22:53:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:16.511 [2024-07-24 22:53:34.296962] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:16.511 [2024-07-24 22:53:34.296976] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:16.511 [2024-07-24 22:53:34.296983] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:16.511 [2024-07-24 22:53:34.296988] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:16.511 [2024-07-24 22:53:34.296991] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting 
scheduler core busy to 95 00:05:16.772 22:53:34 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.772 22:53:34 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:16.772 22:53:34 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.772 22:53:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:16.772 [2024-07-24 22:53:34.350541] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:16.772 22:53:34 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.772 22:53:34 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:16.772 22:53:34 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.772 22:53:34 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.772 22:53:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:16.772 ************************************ 00:05:16.772 START TEST scheduler_create_thread 00:05:16.772 ************************************ 00:05:16.772 22:53:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:16.772 22:53:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:16.772 22:53:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.772 22:53:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.772 2 00:05:16.772 22:53:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.772 22:53:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd 
--plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:16.772 22:53:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.772 22:53:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.772 3 00:05:16.772 22:53:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.772 22:53:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:16.772 22:53:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.772 22:53:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.773 4 00:05:16.773 22:53:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.773 22:53:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:16.773 22:53:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.773 22:53:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.773 5 00:05:16.773 22:53:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.773 22:53:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:16.773 22:53:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.773 22:53:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.773 6 
00:05:16.773 22:53:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.773 22:53:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:16.773 22:53:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.773 22:53:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.773 7 00:05:16.773 22:53:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.773 22:53:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:16.773 22:53:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.773 22:53:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.773 8 00:05:16.773 22:53:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.773 22:53:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:16.773 22:53:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.773 22:53:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.773 9 00:05:16.773 22:53:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.773 22:53:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:16.773 22:53:34 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.773 22:53:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.344 10 00:05:17.344 22:53:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.344 22:53:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:17.344 22:53:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.344 22:53:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.730 22:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.730 22:53:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:18.730 22:53:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:18.730 22:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.730 22:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.301 22:53:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.562 22:53:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:19.562 22:53:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.562 22:53:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.134 22:53:37 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.134 22:53:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:20.134 22:53:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:20.134 22:53:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.134 22:53:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.076 22:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.076 00:05:21.076 real 0m4.223s 00:05:21.076 user 0m0.024s 00:05:21.076 sys 0m0.007s 00:05:21.076 22:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:21.076 22:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.076 ************************************ 00:05:21.076 END TEST scheduler_create_thread 00:05:21.076 ************************************ 00:05:21.076 22:53:38 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:21.076 22:53:38 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 626200 00:05:21.076 22:53:38 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 626200 ']' 00:05:21.076 22:53:38 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 626200 00:05:21.076 22:53:38 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:21.076 22:53:38 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:21.076 22:53:38 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 626200 00:05:21.076 22:53:38 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:21.076 22:53:38 
event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:21.076 22:53:38 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 626200' 00:05:21.076 killing process with pid 626200 00:05:21.076 22:53:38 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 626200 00:05:21.076 22:53:38 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 626200 00:05:21.337 [2024-07-24 22:53:38.891708] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:21.337 00:05:21.337 real 0m5.687s 00:05:21.337 user 0m12.698s 00:05:21.337 sys 0m0.323s 00:05:21.337 22:53:39 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:21.337 22:53:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:21.337 ************************************ 00:05:21.337 END TEST event_scheduler 00:05:21.337 ************************************ 00:05:21.337 22:53:39 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:21.337 22:53:39 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:21.337 22:53:39 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.337 22:53:39 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.337 22:53:39 event -- common/autotest_common.sh@10 -- # set +x 00:05:21.598 ************************************ 00:05:21.598 START TEST app_repeat 00:05:21.598 ************************************ 00:05:21.598 22:53:39 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:21.598 22:53:39 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.598 22:53:39 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.598 22:53:39 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:21.598 22:53:39 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.598 
22:53:39 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:21.598 22:53:39 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:21.598 22:53:39 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:21.598 22:53:39 event.app_repeat -- event/event.sh@19 -- # repeat_pid=627268 00:05:21.598 22:53:39 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:21.598 22:53:39 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:21.598 22:53:39 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 627268' 00:05:21.598 Process app_repeat pid: 627268 00:05:21.598 22:53:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:21.598 22:53:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:21.598 spdk_app_start Round 0 00:05:21.598 22:53:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 627268 /var/tmp/spdk-nbd.sock 00:05:21.598 22:53:39 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 627268 ']' 00:05:21.598 22:53:39 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:21.598 22:53:39 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:21.598 22:53:39 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:21.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:21.598 22:53:39 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:21.598 22:53:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:21.598 [2024-07-24 22:53:39.183128] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:05:21.598 [2024-07-24 22:53:39.183185] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid627268 ] 00:05:21.598 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.598 [2024-07-24 22:53:39.251099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:21.598 [2024-07-24 22:53:39.314833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.598 [2024-07-24 22:53:39.314962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.577 22:53:39 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:22.577 22:53:39 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:22.577 22:53:39 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.577 Malloc0 00:05:22.577 22:53:40 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.577 Malloc1 00:05:22.577 22:53:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.577 22:53:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.577 22:53:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.577 22:53:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:22.577 22:53:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.577 22:53:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:22.577 22:53:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks 
/var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.577 22:53:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.577 22:53:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.577 22:53:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:22.577 22:53:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.577 22:53:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:22.577 22:53:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:22.577 22:53:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:22.577 22:53:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.577 22:53:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:22.837 /dev/nbd0 00:05:22.837 22:53:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:22.838 22:53:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:22.838 22:53:40 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:22.838 22:53:40 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:22.838 22:53:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:22.838 22:53:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:22.838 22:53:40 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:22.838 22:53:40 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:22.838 22:53:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:22.838 22:53:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:22.838 22:53:40 event.app_repeat -- common/autotest_common.sh@885 -- # dd 
if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.838 1+0 records in 00:05:22.838 1+0 records out 00:05:22.838 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283912 s, 14.4 MB/s 00:05:22.838 22:53:40 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:22.838 22:53:40 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:22.838 22:53:40 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:22.838 22:53:40 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:22.838 22:53:40 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:22.838 22:53:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.838 22:53:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.838 22:53:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:22.838 /dev/nbd1 00:05:22.838 22:53:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:22.838 22:53:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:22.838 22:53:40 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:22.838 22:53:40 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:22.838 22:53:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:22.838 22:53:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:22.838 22:53:40 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:22.838 22:53:40 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:22.838 22:53:40 event.app_repeat -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:22.838 22:53:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:22.838 22:53:40 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.838 1+0 records in 00:05:22.838 1+0 records out 00:05:22.838 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0001914 s, 21.4 MB/s 00:05:22.838 22:53:40 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:23.098 22:53:40 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:23.098 22:53:40 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:23.098 22:53:40 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:23.098 22:53:40 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:23.098 22:53:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.098 22:53:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.098 22:53:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.098 22:53:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.098 22:53:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.098 22:53:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:23.098 { 00:05:23.098 "nbd_device": "/dev/nbd0", 00:05:23.098 "bdev_name": "Malloc0" 00:05:23.098 }, 00:05:23.098 { 00:05:23.098 "nbd_device": "/dev/nbd1", 00:05:23.098 "bdev_name": "Malloc1" 00:05:23.098 } 00:05:23.098 ]' 00:05:23.098 22:53:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:23.098 { 
00:05:23.098 "nbd_device": "/dev/nbd0", 00:05:23.098 "bdev_name": "Malloc0" 00:05:23.098 }, 00:05:23.098 { 00:05:23.098 "nbd_device": "/dev/nbd1", 00:05:23.098 "bdev_name": "Malloc1" 00:05:23.098 } 00:05:23.098 ]' 00:05:23.098 22:53:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.098 22:53:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:23.098 /dev/nbd1' 00:05:23.098 22:53:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:23.098 /dev/nbd1' 00:05:23.098 22:53:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.098 22:53:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:23.099 22:53:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:23.099 22:53:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:23.099 22:53:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:23.099 22:53:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:23.099 22:53:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.099 22:53:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.099 22:53:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:23.099 22:53:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:23.099 22:53:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:23.099 22:53:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:23.099 256+0 records in 00:05:23.099 256+0 records out 00:05:23.099 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114517 s, 91.6 MB/s 00:05:23.099 22:53:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 
00:05:23.099 22:53:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:23.099 256+0 records in 00:05:23.099 256+0 records out 00:05:23.099 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158704 s, 66.1 MB/s 00:05:23.099 22:53:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.099 22:53:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:23.360 256+0 records in 00:05:23.360 256+0 records out 00:05:23.360 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0347586 s, 30.2 MB/s 00:05:23.360 22:53:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:23.360 22:53:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.360 22:53:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.360 22:53:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:23.360 22:53:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:23.360 22:53:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:23.360 22:53:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:23.360 22:53:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.360 22:53:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:23.360 22:53:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.360 22:53:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:23.360 22:53:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:23.360 22:53:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:23.360 22:53:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.360 22:53:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.360 22:53:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:23.360 22:53:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:23.360 22:53:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.360 22:53:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:23.360 22:53:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:23.360 22:53:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:23.360 22:53:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:23.360 22:53:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.360 22:53:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.360 22:53:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:23.360 22:53:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.360 22:53:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.360 22:53:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.360 22:53:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:23.622 22:53:41 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:23.622 22:53:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:23.622 22:53:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:23.622 22:53:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.622 22:53:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.622 22:53:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:23.622 22:53:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.622 22:53:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.622 22:53:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.622 22:53:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.622 22:53:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.883 22:53:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:23.884 22:53:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:23.884 22:53:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.884 22:53:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:23.884 22:53:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:23.884 22:53:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.884 22:53:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:23.884 22:53:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:23.884 22:53:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:23.884 22:53:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:23.884 22:53:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:23.884 22:53:41 event.app_repeat -- 
bdev/nbd_common.sh@109 -- # return 0 00:05:23.884 22:53:41 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:23.884 22:53:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:24.145 [2024-07-24 22:53:41.770658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.145 [2024-07-24 22:53:41.834984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.145 [2024-07-24 22:53:41.834986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.145 [2024-07-24 22:53:41.866150] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:24.145 [2024-07-24 22:53:41.866186] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:27.450 22:53:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:27.450 22:53:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:27.450 spdk_app_start Round 1 00:05:27.450 22:53:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 627268 /var/tmp/spdk-nbd.sock 00:05:27.450 22:53:44 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 627268 ']' 00:05:27.450 22:53:44 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:27.450 22:53:44 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:27.450 22:53:44 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:27.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:27.450 22:53:44 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:27.451 22:53:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:27.451 22:53:44 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.451 22:53:44 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:27.451 22:53:44 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.451 Malloc0 00:05:27.451 22:53:44 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.451 Malloc1 00:05:27.451 22:53:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.451 22:53:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.451 22:53:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.451 22:53:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:27.451 22:53:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.451 22:53:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:27.451 22:53:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.451 22:53:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.451 22:53:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.451 22:53:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:27.451 22:53:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.451 22:53:45 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:27.451 22:53:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:27.451 22:53:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:27.451 22:53:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.451 22:53:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:27.451 /dev/nbd0 00:05:27.451 22:53:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:27.451 22:53:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:27.451 22:53:45 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:27.451 22:53:45 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:27.451 22:53:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:27.451 22:53:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:27.451 22:53:45 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:27.451 22:53:45 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:27.451 22:53:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:27.451 22:53:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:27.451 22:53:45 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.451 1+0 records in 00:05:27.451 1+0 records out 00:05:27.451 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246882 s, 16.6 MB/s 00:05:27.451 22:53:45 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.451 22:53:45 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:27.451 22:53:45 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.712 22:53:45 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:27.712 22:53:45 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:27.712 22:53:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.712 22:53:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.712 22:53:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:27.712 /dev/nbd1 00:05:27.712 22:53:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:27.712 22:53:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:27.712 22:53:45 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:27.712 22:53:45 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:27.712 22:53:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:27.712 22:53:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:27.712 22:53:45 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:27.712 22:53:45 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:27.712 22:53:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:27.712 22:53:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:27.712 22:53:45 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.712 1+0 records in 00:05:27.712 1+0 records out 00:05:27.712 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234665 s, 17.5 MB/s 00:05:27.712 22:53:45 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.712 22:53:45 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:27.712 22:53:45 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.712 22:53:45 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:27.712 22:53:45 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:27.712 22:53:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.712 22:53:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.712 22:53:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.712 22:53:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.712 22:53:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.973 22:53:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:27.973 { 00:05:27.973 "nbd_device": "/dev/nbd0", 00:05:27.973 "bdev_name": "Malloc0" 00:05:27.973 }, 00:05:27.973 { 00:05:27.973 "nbd_device": "/dev/nbd1", 00:05:27.973 "bdev_name": "Malloc1" 00:05:27.973 } 00:05:27.973 ]' 00:05:27.973 22:53:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:27.973 { 00:05:27.973 "nbd_device": "/dev/nbd0", 00:05:27.973 "bdev_name": "Malloc0" 00:05:27.973 }, 00:05:27.973 { 00:05:27.973 "nbd_device": "/dev/nbd1", 00:05:27.973 "bdev_name": "Malloc1" 00:05:27.973 } 00:05:27.973 ]' 00:05:27.973 22:53:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.973 22:53:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:27.973 /dev/nbd1' 00:05:27.973 22:53:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:27.973 /dev/nbd1' 00:05:27.973 
22:53:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.973 22:53:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:27.973 22:53:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:27.973 22:53:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:27.973 22:53:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:27.973 22:53:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:27.973 22:53:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.973 22:53:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.973 22:53:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:27.973 22:53:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.974 22:53:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:27.974 22:53:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:27.974 256+0 records in 00:05:27.974 256+0 records out 00:05:27.974 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119487 s, 87.8 MB/s 00:05:27.974 22:53:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.974 22:53:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:27.974 256+0 records in 00:05:27.974 256+0 records out 00:05:27.974 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177155 s, 59.2 MB/s 00:05:27.974 22:53:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.974 22:53:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:27.974 256+0 records in 00:05:27.974 256+0 records out 00:05:27.974 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0179538 s, 58.4 MB/s 00:05:27.974 22:53:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:27.974 22:53:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.974 22:53:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.974 22:53:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:27.974 22:53:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.974 22:53:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:27.974 22:53:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:27.974 22:53:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.974 22:53:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:27.974 22:53:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.974 22:53:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:27.974 22:53:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.974 22:53:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:27.974 22:53:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.974 22:53:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:27.974 22:53:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:27.974 22:53:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:27.974 22:53:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.974 22:53:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:28.235 22:53:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:28.235 22:53:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:28.235 22:53:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:28.235 22:53:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.235 22:53:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.235 22:53:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:28.235 22:53:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.235 22:53:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.235 22:53:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.235 22:53:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:28.497 22:53:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:28.497 22:53:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:28.497 22:53:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:28.497 22:53:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.497 22:53:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.497 22:53:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:28.497 22:53:46 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:28.497 22:53:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.497 22:53:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.497 22:53:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.497 22:53:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.497 22:53:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:28.497 22:53:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:28.497 22:53:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.497 22:53:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:28.497 22:53:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:28.497 22:53:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.497 22:53:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:28.497 22:53:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:28.497 22:53:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:28.497 22:53:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:28.497 22:53:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:28.497 22:53:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:28.497 22:53:46 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:28.758 22:53:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:29.019 [2024-07-24 22:53:46.552917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.019 [2024-07-24 22:53:46.616412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.019 [2024-07-24 22:53:46.616415] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.019 [2024-07-24 22:53:46.648432] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:29.019 [2024-07-24 22:53:46.648469] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:32.325 22:53:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:32.325 22:53:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:32.325 spdk_app_start Round 2 00:05:32.325 22:53:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 627268 /var/tmp/spdk-nbd.sock 00:05:32.325 22:53:49 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 627268 ']' 00:05:32.325 22:53:49 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:32.325 22:53:49 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.325 22:53:49 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:32.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:32.325 22:53:49 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.325 22:53:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:32.325 22:53:49 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.325 22:53:49 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:32.325 22:53:49 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.325 Malloc0 00:05:32.325 22:53:49 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.325 Malloc1 00:05:32.325 22:53:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.325 22:53:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.325 22:53:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.325 22:53:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:32.325 22:53:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.325 22:53:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:32.325 22:53:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.325 22:53:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.325 22:53:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.325 22:53:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:32.325 22:53:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.325 22:53:49 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:32.325 22:53:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:32.325 22:53:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:32.325 22:53:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.325 22:53:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:32.325 /dev/nbd0 00:05:32.325 22:53:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:32.325 22:53:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:32.325 22:53:50 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:32.325 22:53:50 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:32.325 22:53:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:32.325 22:53:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:32.325 22:53:50 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:32.325 22:53:50 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:32.325 22:53:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:32.325 22:53:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:32.325 22:53:50 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:32.325 1+0 records in 00:05:32.325 1+0 records out 00:05:32.325 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302006 s, 13.6 MB/s 00:05:32.325 22:53:50 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.325 22:53:50 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:32.325 22:53:50 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.325 22:53:50 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:32.587 22:53:50 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:32.587 22:53:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.587 22:53:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.587 22:53:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:32.587 /dev/nbd1 00:05:32.587 22:53:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:32.587 22:53:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:32.587 22:53:50 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:32.587 22:53:50 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:32.587 22:53:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:32.587 22:53:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:32.587 22:53:50 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:32.587 22:53:50 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:32.587 22:53:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:32.587 22:53:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:32.587 22:53:50 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:32.587 1+0 records in 00:05:32.587 1+0 records out 00:05:32.587 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240312 s, 17.0 MB/s 00:05:32.587 22:53:50 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.587 22:53:50 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:32.587 22:53:50 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.587 22:53:50 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:32.587 22:53:50 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:32.587 22:53:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.587 22:53:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.587 22:53:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:32.587 22:53:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.587 22:53:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.848 22:53:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:32.848 { 00:05:32.848 "nbd_device": "/dev/nbd0", 00:05:32.848 "bdev_name": "Malloc0" 00:05:32.848 }, 00:05:32.848 { 00:05:32.848 "nbd_device": "/dev/nbd1", 00:05:32.848 "bdev_name": "Malloc1" 00:05:32.848 } 00:05:32.848 ]' 00:05:32.848 22:53:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:32.848 22:53:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:32.848 { 00:05:32.848 "nbd_device": "/dev/nbd0", 00:05:32.848 "bdev_name": "Malloc0" 00:05:32.848 }, 00:05:32.848 { 00:05:32.848 "nbd_device": "/dev/nbd1", 00:05:32.848 "bdev_name": "Malloc1" 00:05:32.848 } 00:05:32.848 ]' 00:05:32.848 22:53:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:32.848 /dev/nbd1' 00:05:32.848 22:53:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:32.848 /dev/nbd1' 00:05:32.848 
22:53:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.848 22:53:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:32.848 22:53:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:32.848 22:53:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:32.848 22:53:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:32.848 22:53:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:32.848 22:53:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.848 22:53:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.848 22:53:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:32.848 22:53:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:32.848 22:53:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:32.848 22:53:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:32.848 256+0 records in 00:05:32.848 256+0 records out 00:05:32.848 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124762 s, 84.0 MB/s 00:05:32.848 22:53:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.848 22:53:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:32.848 256+0 records in 00:05:32.849 256+0 records out 00:05:32.849 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0163729 s, 64.0 MB/s 00:05:32.849 22:53:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.849 22:53:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:32.849 256+0 records in 00:05:32.849 256+0 records out 00:05:32.849 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0185913 s, 56.4 MB/s 00:05:32.849 22:53:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:32.849 22:53:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.849 22:53:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.849 22:53:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:32.849 22:53:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:32.849 22:53:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:32.849 22:53:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:32.849 22:53:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.849 22:53:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:32.849 22:53:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.849 22:53:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:32.849 22:53:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:32.849 22:53:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:32.849 22:53:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.849 22:53:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:32.849 22:53:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:32.849 22:53:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:32.849 22:53:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.849 22:53:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:33.110 22:53:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:33.110 22:53:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:33.110 22:53:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:33.110 22:53:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.110 22:53:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.110 22:53:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:33.110 22:53:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:33.110 22:53:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.110 22:53:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.110 22:53:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:33.372 22:53:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:33.372 22:53:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:33.372 22:53:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:33.372 22:53:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.372 22:53:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.372 22:53:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:33.372 22:53:50 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:33.372 22:53:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.372 22:53:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.372 22:53:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.372 22:53:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.372 22:53:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:33.372 22:53:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:33.372 22:53:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:33.372 22:53:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:33.372 22:53:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:33.372 22:53:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.372 22:53:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:33.372 22:53:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:33.372 22:53:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:33.372 22:53:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:33.372 22:53:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:33.372 22:53:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:33.372 22:53:51 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:33.634 22:53:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:33.895 [2024-07-24 22:53:51.421217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:33.895 [2024-07-24 22:53:51.485418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.895 [2024-07-24 22:53:51.485421] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.895 [2024-07-24 22:53:51.516670] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:33.895 [2024-07-24 22:53:51.516703] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:37.196 22:53:54 event.app_repeat -- event/event.sh@38 -- # waitforlisten 627268 /var/tmp/spdk-nbd.sock 00:05:37.196 22:53:54 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 627268 ']' 00:05:37.196 22:53:54 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:37.196 22:53:54 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:37.196 22:53:54 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:37.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:37.196 22:53:54 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:37.196 22:53:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:37.196 22:53:54 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:37.196 22:53:54 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:37.196 22:53:54 event.app_repeat -- event/event.sh@39 -- # killprocess 627268 00:05:37.196 22:53:54 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 627268 ']' 00:05:37.196 22:53:54 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 627268 00:05:37.196 22:53:54 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:37.196 22:53:54 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:37.196 22:53:54 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 627268 00:05:37.196 22:53:54 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:37.196 22:53:54 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:37.196 22:53:54 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 627268' 00:05:37.196 killing process with pid 627268 00:05:37.196 22:53:54 event.app_repeat -- common/autotest_common.sh@969 -- # kill 627268 00:05:37.196 22:53:54 event.app_repeat -- common/autotest_common.sh@974 -- # wait 627268 00:05:37.196 spdk_app_start is called in Round 0. 00:05:37.196 Shutdown signal received, stop current app iteration 00:05:37.196 Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 reinitialization... 00:05:37.196 spdk_app_start is called in Round 1. 00:05:37.196 Shutdown signal received, stop current app iteration 00:05:37.196 Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 reinitialization... 00:05:37.196 spdk_app_start is called in Round 2. 
00:05:37.196 Shutdown signal received, stop current app iteration 00:05:37.196 Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 reinitialization... 00:05:37.196 spdk_app_start is called in Round 3. 00:05:37.196 Shutdown signal received, stop current app iteration 00:05:37.196 22:53:54 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:37.196 22:53:54 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:37.196 00:05:37.196 real 0m15.466s 00:05:37.196 user 0m33.364s 00:05:37.196 sys 0m2.095s 00:05:37.196 22:53:54 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.196 22:53:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:37.196 ************************************ 00:05:37.196 END TEST app_repeat 00:05:37.196 ************************************ 00:05:37.196 22:53:54 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:37.196 22:53:54 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:37.196 22:53:54 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.196 22:53:54 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.196 22:53:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:37.196 ************************************ 00:05:37.196 START TEST cpu_locks 00:05:37.196 ************************************ 00:05:37.196 22:53:54 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:37.196 * Looking for test storage... 
00:05:37.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:37.196 22:53:54 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:37.197 22:53:54 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:37.197 22:53:54 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:37.197 22:53:54 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:37.197 22:53:54 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.197 22:53:54 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.197 22:53:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.197 ************************************ 00:05:37.197 START TEST default_locks 00:05:37.197 ************************************ 00:05:37.197 22:53:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:37.197 22:53:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=630641 00:05:37.197 22:53:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 630641 00:05:37.197 22:53:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:37.197 22:53:54 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 630641 ']' 00:05:37.197 22:53:54 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.197 22:53:54 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:37.197 22:53:54 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:37.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.197 22:53:54 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:37.197 22:53:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.197 [2024-07-24 22:53:54.896927] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:05:37.197 [2024-07-24 22:53:54.896987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid630641 ] 00:05:37.197 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.197 [2024-07-24 22:53:54.965534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.458 [2024-07-24 22:53:55.031317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.028 22:53:55 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:38.028 22:53:55 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:38.028 22:53:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 630641 00:05:38.028 22:53:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 630641 00:05:38.028 22:53:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:38.599 lslocks: write error 00:05:38.599 22:53:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 630641 00:05:38.599 22:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 630641 ']' 00:05:38.599 22:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 630641 00:05:38.599 22:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:38.599 22:53:56 event.cpu_locks.default_locks -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:38.599 22:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 630641 00:05:38.599 22:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:38.599 22:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:38.599 22:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 630641' 00:05:38.599 killing process with pid 630641 00:05:38.599 22:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 630641 00:05:38.599 22:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 630641 00:05:38.860 22:53:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 630641 00:05:38.860 22:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:38.860 22:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 630641 00:05:38.860 22:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:38.860 22:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:38.860 22:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:38.860 22:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:38.860 22:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 630641 00:05:38.860 22:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 630641 ']' 00:05:38.860 22:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.860 22:53:56 event.cpu_locks.default_locks -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:05:38.860 22:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.860 22:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:38.860 22:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (630641) - No such process 00:05:38.860 ERROR: process (pid: 630641) is no longer running 00:05:38.860 22:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:38.860 22:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:38.860 22:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:38.860 22:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:38.860 22:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:38.860 22:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:38.860 22:53:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:38.860 22:53:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:38.860 22:53:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:38.860 22:53:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:38.860 00:05:38.860 real 0m1.632s 00:05:38.860 user 0m1.730s 00:05:38.860 sys 0m0.550s 00:05:38.860 22:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.860 22:53:56 event.cpu_locks.default_locks -- 
common/autotest_common.sh@10 -- # set +x 00:05:38.860 ************************************ 00:05:38.860 END TEST default_locks 00:05:38.860 ************************************ 00:05:38.860 22:53:56 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:38.860 22:53:56 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.860 22:53:56 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.860 22:53:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.860 ************************************ 00:05:38.860 START TEST default_locks_via_rpc 00:05:38.860 ************************************ 00:05:38.860 22:53:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:38.860 22:53:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=631012 00:05:38.860 22:53:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 631012 00:05:38.860 22:53:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:38.860 22:53:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 631012 ']' 00:05:38.860 22:53:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.860 22:53:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:38.860 22:53:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:38.860 22:53:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:38.860 22:53:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.860 [2024-07-24 22:53:56.587534] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:05:38.860 [2024-07-24 22:53:56.587600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid631012 ] 00:05:38.860 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.121 [2024-07-24 22:53:56.659514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.121 [2024-07-24 22:53:56.734124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.691 22:53:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.691 22:53:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:39.691 22:53:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:39.691 22:53:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.691 22:53:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.691 22:53:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.691 22:53:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:39.691 22:53:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:39.691 22:53:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:39.691 22:53:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 
00:05:39.691 22:53:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:39.691 22:53:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:39.691 22:53:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:39.691 22:53:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:39.691 22:53:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 631012
00:05:39.691 22:53:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 631012
00:05:39.691 22:53:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:40.262 22:53:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 631012
00:05:40.262 22:53:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 631012 ']'
00:05:40.262 22:53:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 631012
00:05:40.262 22:53:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname
00:05:40.262 22:53:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:40.262 22:53:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 631012
00:05:40.262 22:53:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:40.262 22:53:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:40.262 22:53:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 631012'
00:05:40.262 killing process with pid 631012
00:05:40.262 22:53:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 631012
00:05:40.262 22:53:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 631012
00:05:40.262
00:05:40.262 real 0m1.502s
00:05:40.262 user 0m1.575s
00:05:40.262 sys 0m0.523s
00:05:40.262 22:53:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:40.262 22:53:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:40.262 ************************************
00:05:40.262 END TEST default_locks_via_rpc
00:05:40.262 ************************************
00:05:40.522 22:53:58 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:40.522 22:53:58 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:40.522 22:53:58 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:40.522 22:53:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:40.522 ************************************
00:05:40.522 START TEST non_locking_app_on_locked_coremask
00:05:40.522 ************************************
00:05:40.522 22:53:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask
00:05:40.522 22:53:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=631319
00:05:40.522 22:53:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 631319 /var/tmp/spdk.sock
00:05:40.522 22:53:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:40.522 22:53:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 631319 ']'
00:05:40.522 22:53:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:40.522 22:53:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:40.522 22:53:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:40.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:40.522 22:53:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:40.522 22:53:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:40.522 [2024-07-24 22:53:58.173761] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization...
00:05:40.522 [2024-07-24 22:53:58.173822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid631319 ]
00:05:40.522 EAL: No free 2048 kB hugepages reported on node 1
00:05:40.522 [2024-07-24 22:53:58.244006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:40.783 [2024-07-24 22:53:58.318900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:41.353 22:53:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:41.353 22:53:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:05:41.353 22:53:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=631582
00:05:41.353 22:53:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:41.353 22:53:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 631582 /var/tmp/spdk2.sock
00:05:41.353 22:53:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 631582 ']'
00:05:41.353 22:53:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:41.353 22:53:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:41.353 22:53:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:41.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:41.353 22:53:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:41.353 22:53:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:41.353 [2024-07-24 22:53:58.977092] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization...
00:05:41.353 [2024-07-24 22:53:58.977145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid631582 ]
00:05:41.353 EAL: No free 2048 kB hugepages reported on node 1
00:05:41.353 [2024-07-24 22:53:59.075118] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:41.353 [2024-07-24 22:53:59.075142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:41.614 [2024-07-24 22:53:59.204989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:42.185 22:53:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:42.185 22:53:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:05:42.185 22:53:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 631319
00:05:42.185 22:53:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 631319
00:05:42.185 22:53:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:42.757 lslocks: write error
00:05:42.757 22:54:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 631319
00:05:42.757 22:54:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 631319 ']'
00:05:42.757 22:54:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 631319
00:05:42.757 22:54:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:05:42.757 22:54:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:42.757 22:54:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 631319
00:05:42.757 22:54:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:42.757 22:54:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:42.757 22:54:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 631319'
00:05:42.757 killing process with pid 631319
00:05:42.757 22:54:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 631319
00:05:42.757 22:54:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 631319
00:05:43.018 22:54:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 631582
00:05:43.018 22:54:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 631582 ']'
00:05:43.018 22:54:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 631582
00:05:43.018 22:54:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:05:43.018 22:54:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:43.018 22:54:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 631582
00:05:43.018 22:54:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:43.018 22:54:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:43.018 22:54:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 631582'
00:05:43.018 killing process with pid 631582
00:05:43.018 22:54:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 631582
00:05:43.018 22:54:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 631582
00:05:43.283
00:05:43.283 real 0m2.905s
00:05:43.283 user 0m3.150s
00:05:43.283 sys 0m0.890s
00:05:43.283 22:54:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:43.283 22:54:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:43.283 ************************************
00:05:43.283 END TEST non_locking_app_on_locked_coremask
00:05:43.283 ************************************
00:05:43.283 22:54:01 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:05:43.283 22:54:01 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:43.283 22:54:01 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:43.283 22:54:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:43.564 ************************************
00:05:43.564 START TEST locking_app_on_unlocked_coremask
00:05:43.564 ************************************
00:05:43.564 22:54:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask
00:05:43.564 22:54:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=632019
00:05:43.564 22:54:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 632019 /var/tmp/spdk.sock
00:05:43.564 22:54:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:05:43.564 22:54:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 632019 ']'
00:05:43.564 22:54:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:43.564 22:54:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:43.564 22:54:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:43.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:43.564 22:54:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:43.564 22:54:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:43.564 [2024-07-24 22:54:01.148362] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization...
00:05:43.564 [2024-07-24 22:54:01.148416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid632019 ]
00:05:43.564 EAL: No free 2048 kB hugepages reported on node 1
00:05:43.564 [2024-07-24 22:54:01.217547] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:43.564 [2024-07-24 22:54:01.217579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:43.564 [2024-07-24 22:54:01.289602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:44.136 22:54:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:44.136 22:54:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:05:44.136 22:54:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:44.136 22:54:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=632353
00:05:44.136 22:54:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 632353 /var/tmp/spdk2.sock
00:05:44.136 22:54:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 632353 ']'
00:05:44.136 22:54:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:44.136 22:54:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:44.136 22:54:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:44.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:44.136 22:54:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:44.136 22:54:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:44.397 [2024-07-24 22:54:01.936367] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization...
00:05:44.397 [2024-07-24 22:54:01.936423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid632353 ]
00:05:44.397 EAL: No free 2048 kB hugepages reported on node 1
00:05:44.397 [2024-07-24 22:54:02.039343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:44.397 [2024-07-24 22:54:02.168306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:44.968 22:54:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:44.968 22:54:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:05:44.968 22:54:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 632353
00:05:44.968 22:54:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 632353
00:05:44.968 22:54:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:45.539 lslocks: write error
00:05:45.539 22:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 632019
00:05:45.539 22:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 632019 ']'
00:05:45.539 22:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 632019
00:05:45.539 22:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:05:45.539 22:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:45.539 22:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 632019
00:05:45.539 22:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:45.539 22:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:45.539 22:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 632019'
00:05:45.539 killing process with pid 632019
00:05:45.539 22:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 632019
00:05:45.539 22:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 632019
00:05:46.110 22:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 632353
00:05:46.110 22:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 632353 ']'
00:05:46.110 22:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 632353
00:05:46.110 22:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:05:46.110 22:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:46.110 22:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 632353
00:05:46.110 22:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:46.110 22:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:46.110 22:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 632353'
00:05:46.110 killing process with pid 632353
00:05:46.110 22:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 632353
00:05:46.110 22:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 632353
00:05:46.372
00:05:46.372 real 0m2.816s
00:05:46.372 user 0m3.041s
00:05:46.372 sys 0m0.861s
00:05:46.372 22:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:46.372 22:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:46.372 ************************************
00:05:46.372 END TEST locking_app_on_unlocked_coremask
00:05:46.372 ************************************
00:05:46.372 22:54:03 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:05:46.372 22:54:03 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:46.372 22:54:03 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:46.372 22:54:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:46.372 ************************************
00:05:46.372 START TEST locking_app_on_locked_coremask
00:05:46.372 ************************************
00:05:46.372 22:54:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask
00:05:46.372 22:54:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=632775
00:05:46.372 22:54:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 632775 /var/tmp/spdk.sock
00:05:46.372 22:54:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:46.372 22:54:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 632775 ']'
00:05:46.372 22:54:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:46.372 22:54:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:46.372 22:54:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:46.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:46.372 22:54:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:46.372 22:54:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:46.372 [2024-07-24 22:54:04.045555] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization...
00:05:46.372 [2024-07-24 22:54:04.045616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid632775 ]
00:05:46.372 EAL: No free 2048 kB hugepages reported on node 1
00:05:46.372 [2024-07-24 22:54:04.116020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:46.633 [2024-07-24 22:54:04.190580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:47.205 22:54:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:47.205 22:54:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:05:47.205 22:54:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:47.205 22:54:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=632820
00:05:47.205 22:54:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 632820 /var/tmp/spdk2.sock
00:05:47.205 22:54:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0
00:05:47.205 22:54:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 632820 /var/tmp/spdk2.sock
00:05:47.205 22:54:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:05:47.205 22:54:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:47.205 22:54:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:05:47.205 22:54:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:47.205 22:54:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 632820 /var/tmp/spdk2.sock
00:05:47.205 22:54:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 632820 ']'
00:05:47.205 22:54:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:47.205 22:54:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:47.205 22:54:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:47.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:47.205 22:54:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:47.205 22:54:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:47.205 [2024-07-24 22:54:04.831429] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization...
00:05:47.205 [2024-07-24 22:54:04.831482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid632820 ]
00:05:47.205 EAL: No free 2048 kB hugepages reported on node 1
00:05:47.205 [2024-07-24 22:54:04.930401] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 632775 has claimed it.
00:05:47.205 [2024-07-24 22:54:04.930440] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:47.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (632820) - No such process
00:05:47.776 ERROR: process (pid: 632820) is no longer running
00:05:47.776 22:54:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:47.776 22:54:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1
00:05:47.776 22:54:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1
00:05:47.776 22:54:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:47.776 22:54:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:47.776 22:54:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:47.776 22:54:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 632775
00:05:47.777 22:54:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 632775
00:05:47.777 22:54:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:48.357 lslocks: write error
00:05:48.357 22:54:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 632775
00:05:48.357 22:54:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 632775 ']'
00:05:48.357 22:54:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 632775
00:05:48.357 22:54:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:05:48.357 22:54:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:48.357 22:54:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 632775
00:05:48.357 22:54:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:48.357 22:54:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:48.357 22:54:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 632775'
00:05:48.357 killing process with pid 632775
00:05:48.357 22:54:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 632775
00:05:48.357 22:54:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 632775
00:05:48.357
00:05:48.357 real 0m2.150s
00:05:48.357 user 0m2.348s
00:05:48.357 sys 0m0.624s
00:05:48.357 22:54:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:48.357 22:54:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:48.357 ************************************
00:05:48.357 END TEST locking_app_on_locked_coremask
00:05:48.357 ************************************
00:05:48.618 22:54:06 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:05:48.618 22:54:06 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:48.618 22:54:06 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:48.618 22:54:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:48.618 ************************************
00:05:48.618 START TEST locking_overlapped_coremask
00:05:48.618 ************************************
00:05:48.618 22:54:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask
00:05:48.618 22:54:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=633150
00:05:48.618 22:54:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 633150 /var/tmp/spdk.sock
00:05:48.618 22:54:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:05:48.618 22:54:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 633150 ']'
00:05:48.618 22:54:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:48.618 22:54:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:48.618 22:54:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:48.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.618 22:54:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.618 22:54:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.618 [2024-07-24 22:54:06.267642] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:05:48.618 [2024-07-24 22:54:06.267692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid633150 ] 00:05:48.618 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.618 [2024-07-24 22:54:06.333735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:48.618 [2024-07-24 22:54:06.401336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.618 [2024-07-24 22:54:06.401452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.618 [2024-07-24 22:54:06.401455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.562 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.562 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:49.562 22:54:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=633478 00:05:49.562 22:54:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 633478 /var/tmp/spdk2.sock 00:05:49.562 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:49.562 22:54:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:49.562 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 633478 /var/tmp/spdk2.sock 00:05:49.562 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:49.562 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:49.562 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:49.562 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:49.562 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 633478 /var/tmp/spdk2.sock 00:05:49.562 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 633478 ']' 00:05:49.562 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:49.562 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:49.562 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:49.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:49.562 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:49.562 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.562 [2024-07-24 22:54:07.060411] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:05:49.562 [2024-07-24 22:54:07.060468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid633478 ] 00:05:49.562 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.562 [2024-07-24 22:54:07.140731] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 633150 has claimed it. 00:05:49.562 [2024-07-24 22:54:07.140765] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:50.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (633478) - No such process 00:05:50.134 ERROR: process (pid: 633478) is no longer running 00:05:50.134 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.134 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:50.134 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:50.134 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:50.134 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:50.134 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:50.134 22:54:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:50.134 22:54:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:50.134 22:54:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:50.134 22:54:07 event.cpu_locks.locking_overlapped_coremask 
-- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:50.134 22:54:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 633150 00:05:50.134 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 633150 ']' 00:05:50.134 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 633150 00:05:50.134 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:50.134 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:50.134 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 633150 00:05:50.134 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:50.134 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:50.134 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 633150' 00:05:50.134 killing process with pid 633150 00:05:50.134 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 633150 00:05:50.134 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 633150 00:05:50.134 00:05:50.134 real 0m1.718s 00:05:50.134 user 0m4.808s 00:05:50.134 sys 0m0.362s 00:05:50.134 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.134 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- 
# set +x 00:05:50.134 ************************************ 00:05:50.134 END TEST locking_overlapped_coremask 00:05:50.134 ************************************ 00:05:50.396 22:54:07 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:50.396 22:54:07 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.396 22:54:07 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.396 22:54:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.396 ************************************ 00:05:50.396 START TEST locking_overlapped_coremask_via_rpc 00:05:50.396 ************************************ 00:05:50.396 22:54:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:50.396 22:54:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=633525 00:05:50.396 22:54:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 633525 /var/tmp/spdk.sock 00:05:50.396 22:54:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:50.396 22:54:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 633525 ']' 00:05:50.396 22:54:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.396 22:54:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.396 22:54:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:50.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.396 22:54:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.396 22:54:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.396 [2024-07-24 22:54:08.059841] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:05:50.396 [2024-07-24 22:54:08.059903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid633525 ] 00:05:50.396 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.396 [2024-07-24 22:54:08.131580] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:50.396 [2024-07-24 22:54:08.131615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:50.657 [2024-07-24 22:54:08.211038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.657 [2024-07-24 22:54:08.211163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.657 [2024-07-24 22:54:08.211166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.229 22:54:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:51.229 22:54:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:51.229 22:54:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=633863 00:05:51.229 22:54:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 633863 /var/tmp/spdk2.sock 00:05:51.229 22:54:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 633863 
']' 00:05:51.230 22:54:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:51.230 22:54:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:51.230 22:54:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.230 22:54:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:51.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:51.230 22:54:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.230 22:54:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.230 [2024-07-24 22:54:08.884806] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:05:51.230 [2024-07-24 22:54:08.884859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid633863 ] 00:05:51.230 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.230 [2024-07-24 22:54:08.966671] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:51.230 [2024-07-24 22:54:08.966694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:51.490 [2024-07-24 22:54:09.072470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:51.490 [2024-07-24 22:54:09.072615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:51.490 [2024-07-24 22:54:09.072617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:52.061 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.061 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:52.061 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:52.061 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.061 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.061 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.061 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:52.061 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:52.061 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:52.061 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:52.061 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.061 22:54:09 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:52.061 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.061 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:52.061 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.061 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.061 [2024-07-24 22:54:09.663812] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 633525 has claimed it. 00:05:52.061 request: 00:05:52.061 { 00:05:52.061 "method": "framework_enable_cpumask_locks", 00:05:52.061 "req_id": 1 00:05:52.061 } 00:05:52.061 Got JSON-RPC error response 00:05:52.061 response: 00:05:52.061 { 00:05:52.061 "code": -32603, 00:05:52.061 "message": "Failed to claim CPU core: 2" 00:05:52.061 } 00:05:52.061 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:52.061 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:52.061 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:52.061 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:52.061 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:52.061 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 633525 /var/tmp/spdk.sock 00:05:52.061 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- 
# '[' -z 633525 ']' 00:05:52.061 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.062 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.062 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.062 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.062 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.322 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.322 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:52.322 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 633863 /var/tmp/spdk2.sock 00:05:52.322 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 633863 ']' 00:05:52.322 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.322 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.322 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:52.322 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.322 22:54:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.322 22:54:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.322 22:54:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:52.322 22:54:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:52.322 22:54:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:52.322 22:54:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:52.322 22:54:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:52.322 00:05:52.322 real 0m2.011s 00:05:52.322 user 0m0.759s 00:05:52.322 sys 0m0.178s 00:05:52.322 22:54:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.322 22:54:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.322 ************************************ 00:05:52.322 END TEST locking_overlapped_coremask_via_rpc 00:05:52.322 ************************************ 00:05:52.322 22:54:10 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:52.322 22:54:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 633525 ]] 00:05:52.322 22:54:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 633525 00:05:52.322 22:54:10 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 633525 ']' 00:05:52.322 22:54:10 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 633525 00:05:52.322 22:54:10 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:52.322 22:54:10 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:52.322 22:54:10 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 633525 00:05:52.322 22:54:10 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:52.322 22:54:10 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:52.322 22:54:10 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 633525' 00:05:52.322 killing process with pid 633525 00:05:52.322 22:54:10 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 633525 00:05:52.322 22:54:10 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 633525 00:05:52.583 22:54:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 633863 ]] 00:05:52.583 22:54:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 633863 00:05:52.583 22:54:10 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 633863 ']' 00:05:52.583 22:54:10 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 633863 00:05:52.583 22:54:10 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:52.583 22:54:10 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:52.583 22:54:10 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 633863 00:05:52.583 22:54:10 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:52.583 22:54:10 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:52.583 22:54:10 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 633863' 00:05:52.583 
killing process with pid 633863 00:05:52.583 22:54:10 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 633863 00:05:52.583 22:54:10 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 633863 00:05:52.844 22:54:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:52.844 22:54:10 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:52.844 22:54:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 633525 ]] 00:05:52.844 22:54:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 633525 00:05:52.844 22:54:10 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 633525 ']' 00:05:52.844 22:54:10 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 633525 00:05:52.844 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (633525) - No such process 00:05:52.844 22:54:10 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 633525 is not found' 00:05:52.844 Process with pid 633525 is not found 00:05:52.844 22:54:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 633863 ]] 00:05:52.844 22:54:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 633863 00:05:52.844 22:54:10 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 633863 ']' 00:05:52.844 22:54:10 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 633863 00:05:52.844 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (633863) - No such process 00:05:52.844 22:54:10 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 633863 is not found' 00:05:52.844 Process with pid 633863 is not found 00:05:52.844 22:54:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:52.844 00:05:52.844 real 0m15.878s 00:05:52.844 user 0m26.871s 00:05:52.844 sys 0m4.901s 00:05:52.844 22:54:10 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.844 22:54:10 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:05:52.844 ************************************ 00:05:52.844 END TEST cpu_locks 00:05:52.844 ************************************ 00:05:52.844 00:05:52.844 real 0m41.242s 00:05:52.844 user 1m19.535s 00:05:52.844 sys 0m7.945s 00:05:52.844 22:54:10 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.844 22:54:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.844 ************************************ 00:05:52.844 END TEST event 00:05:52.844 ************************************ 00:05:53.105 22:54:10 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:53.105 22:54:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.105 22:54:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.105 22:54:10 -- common/autotest_common.sh@10 -- # set +x 00:05:53.105 ************************************ 00:05:53.105 START TEST thread 00:05:53.105 ************************************ 00:05:53.105 22:54:10 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:53.105 * Looking for test storage... 
00:05:53.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:53.105 22:54:10 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:53.105 22:54:10 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:53.105 22:54:10 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.105 22:54:10 thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.105 ************************************ 00:05:53.105 START TEST thread_poller_perf 00:05:53.105 ************************************ 00:05:53.105 22:54:10 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:53.105 [2024-07-24 22:54:10.841606] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:05:53.105 [2024-07-24 22:54:10.841712] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid634687 ] 00:05:53.105 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.366 [2024-07-24 22:54:10.926648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.366 [2024-07-24 22:54:11.001154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.366 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:54.307 ====================================== 00:05:54.307 busy:2413597444 (cyc) 00:05:54.307 total_run_count: 287000 00:05:54.307 tsc_hz: 2400000000 (cyc) 00:05:54.307 ====================================== 00:05:54.307 poller_cost: 8409 (cyc), 3503 (nsec) 00:05:54.307 00:05:54.307 real 0m1.246s 00:05:54.307 user 0m1.152s 00:05:54.307 sys 0m0.089s 00:05:54.307 22:54:12 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.307 22:54:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:54.307 ************************************ 00:05:54.307 END TEST thread_poller_perf 00:05:54.307 ************************************ 00:05:54.568 22:54:12 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:54.568 22:54:12 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:54.568 22:54:12 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.568 22:54:12 thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.568 ************************************ 00:05:54.568 START TEST thread_poller_perf 00:05:54.568 ************************************ 00:05:54.568 22:54:12 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:54.568 [2024-07-24 22:54:12.163852] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:05:54.568 [2024-07-24 22:54:12.163960] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid635093 ] 00:05:54.568 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.568 [2024-07-24 22:54:12.240485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.568 [2024-07-24 22:54:12.307345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.568 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:55.952 ====================================== 00:05:55.952 busy:2401928120 (cyc) 00:05:55.952 total_run_count: 3811000 00:05:55.952 tsc_hz: 2400000000 (cyc) 00:05:55.952 ====================================== 00:05:55.952 poller_cost: 630 (cyc), 262 (nsec) 00:05:55.952 00:05:55.952 real 0m1.221s 00:05:55.952 user 0m1.131s 00:05:55.952 sys 0m0.087s 00:05:55.952 22:54:13 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.952 22:54:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:55.952 ************************************ 00:05:55.952 END TEST thread_poller_perf 00:05:55.952 ************************************ 00:05:55.952 22:54:13 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:55.952 00:05:55.952 real 0m2.718s 00:05:55.952 user 0m2.386s 00:05:55.952 sys 0m0.340s 00:05:55.952 22:54:13 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.952 22:54:13 thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.952 ************************************ 00:05:55.952 END TEST thread 00:05:55.952 ************************************ 00:05:55.952 22:54:13 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:05:55.952 22:54:13 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 
00:05:55.952 22:54:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.952 22:54:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.952 22:54:13 -- common/autotest_common.sh@10 -- # set +x 00:05:55.952 ************************************ 00:05:55.952 START TEST app_cmdline 00:05:55.952 ************************************ 00:05:55.952 22:54:13 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:55.952 * Looking for test storage... 00:05:55.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:55.952 22:54:13 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:55.952 22:54:13 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=635358 00:05:55.952 22:54:13 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 635358 00:05:55.952 22:54:13 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:55.952 22:54:13 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 635358 ']' 00:05:55.952 22:54:13 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.952 22:54:13 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:55.952 22:54:13 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.952 22:54:13 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:55.952 22:54:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:55.952 [2024-07-24 22:54:13.638545] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:05:55.952 [2024-07-24 22:54:13.638629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid635358 ] 00:05:55.952 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.952 [2024-07-24 22:54:13.712001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.213 [2024-07-24 22:54:13.787299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.783 22:54:14 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.783 22:54:14 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:56.783 22:54:14 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:56.783 { 00:05:56.783 "version": "SPDK v24.09-pre git sha1 415e0bb41", 00:05:56.783 "fields": { 00:05:56.783 "major": 24, 00:05:56.783 "minor": 9, 00:05:56.783 "patch": 0, 00:05:56.783 "suffix": "-pre", 00:05:56.783 "commit": "415e0bb41" 00:05:56.783 } 00:05:56.783 } 00:05:57.044 22:54:14 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:57.044 22:54:14 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:57.044 22:54:14 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:57.044 22:54:14 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:57.044 22:54:14 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:57.044 22:54:14 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:57.044 22:54:14 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:57.044 22:54:14 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.044 22:54:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:57.044 22:54:14 app_cmdline -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.044 22:54:14 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:57.044 22:54:14 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:57.044 22:54:14 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:57.044 22:54:14 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:57.044 22:54:14 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:57.044 22:54:14 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:57.044 22:54:14 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.044 22:54:14 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:57.044 22:54:14 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.044 22:54:14 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:57.044 22:54:14 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.044 22:54:14 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:57.044 22:54:14 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:57.044 22:54:14 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:57.044 request: 00:05:57.044 { 00:05:57.044 "method": "env_dpdk_get_mem_stats", 00:05:57.044 "req_id": 1 
00:05:57.044 } 00:05:57.044 Got JSON-RPC error response 00:05:57.044 response: 00:05:57.044 { 00:05:57.044 "code": -32601, 00:05:57.044 "message": "Method not found" 00:05:57.044 } 00:05:57.044 22:54:14 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:57.044 22:54:14 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:57.044 22:54:14 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:57.044 22:54:14 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:57.044 22:54:14 app_cmdline -- app/cmdline.sh@1 -- # killprocess 635358 00:05:57.044 22:54:14 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 635358 ']' 00:05:57.044 22:54:14 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 635358 00:05:57.044 22:54:14 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:57.044 22:54:14 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:57.044 22:54:14 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 635358 00:05:57.304 22:54:14 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:57.304 22:54:14 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:57.304 22:54:14 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 635358' 00:05:57.304 killing process with pid 635358 00:05:57.304 22:54:14 app_cmdline -- common/autotest_common.sh@969 -- # kill 635358 00:05:57.304 22:54:14 app_cmdline -- common/autotest_common.sh@974 -- # wait 635358 00:05:57.304 00:05:57.304 real 0m1.571s 00:05:57.304 user 0m1.894s 00:05:57.304 sys 0m0.404s 00:05:57.304 22:54:15 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.304 22:54:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:57.305 ************************************ 00:05:57.305 END TEST app_cmdline 00:05:57.305 ************************************ 00:05:57.305 22:54:15 -- 
spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:57.305 22:54:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.305 22:54:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.305 22:54:15 -- common/autotest_common.sh@10 -- # set +x 00:05:57.565 ************************************ 00:05:57.565 START TEST version 00:05:57.565 ************************************ 00:05:57.565 22:54:15 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:57.565 * Looking for test storage... 00:05:57.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:57.566 22:54:15 version -- app/version.sh@17 -- # get_header_version major 00:05:57.566 22:54:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:57.566 22:54:15 version -- app/version.sh@14 -- # cut -f2 00:05:57.566 22:54:15 version -- app/version.sh@14 -- # tr -d '"' 00:05:57.566 22:54:15 version -- app/version.sh@17 -- # major=24 00:05:57.566 22:54:15 version -- app/version.sh@18 -- # get_header_version minor 00:05:57.566 22:54:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:57.566 22:54:15 version -- app/version.sh@14 -- # cut -f2 00:05:57.566 22:54:15 version -- app/version.sh@14 -- # tr -d '"' 00:05:57.566 22:54:15 version -- app/version.sh@18 -- # minor=9 00:05:57.566 22:54:15 version -- app/version.sh@19 -- # get_header_version patch 00:05:57.566 22:54:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:57.566 22:54:15 version -- app/version.sh@14 -- # cut -f2 00:05:57.566 22:54:15 
version -- app/version.sh@14 -- # tr -d '"' 00:05:57.566 22:54:15 version -- app/version.sh@19 -- # patch=0 00:05:57.566 22:54:15 version -- app/version.sh@20 -- # get_header_version suffix 00:05:57.566 22:54:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:57.566 22:54:15 version -- app/version.sh@14 -- # cut -f2 00:05:57.566 22:54:15 version -- app/version.sh@14 -- # tr -d '"' 00:05:57.566 22:54:15 version -- app/version.sh@20 -- # suffix=-pre 00:05:57.566 22:54:15 version -- app/version.sh@22 -- # version=24.9 00:05:57.566 22:54:15 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:57.566 22:54:15 version -- app/version.sh@28 -- # version=24.9rc0 00:05:57.566 22:54:15 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:57.566 22:54:15 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:57.566 22:54:15 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:57.566 22:54:15 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:57.566 00:05:57.566 real 0m0.176s 00:05:57.566 user 0m0.099s 00:05:57.566 sys 0m0.118s 00:05:57.566 22:54:15 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.566 22:54:15 version -- common/autotest_common.sh@10 -- # set +x 00:05:57.566 ************************************ 00:05:57.566 END TEST version 00:05:57.566 ************************************ 00:05:57.566 22:54:15 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:05:57.566 22:54:15 -- spdk/autotest.sh@202 -- # uname -s 00:05:57.566 22:54:15 -- spdk/autotest.sh@202 -- # [[ Linux == 
Linux ]] 00:05:57.566 22:54:15 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:05:57.566 22:54:15 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:05:57.566 22:54:15 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:05:57.566 22:54:15 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:05:57.566 22:54:15 -- spdk/autotest.sh@264 -- # timing_exit lib 00:05:57.566 22:54:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:57.566 22:54:15 -- common/autotest_common.sh@10 -- # set +x 00:05:57.827 22:54:15 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:05:57.827 22:54:15 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:05:57.827 22:54:15 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:05:57.827 22:54:15 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:05:57.827 22:54:15 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:05:57.827 22:54:15 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:05:57.827 22:54:15 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:57.827 22:54:15 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:57.827 22:54:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.827 22:54:15 -- common/autotest_common.sh@10 -- # set +x 00:05:57.827 ************************************ 00:05:57.827 START TEST nvmf_tcp 00:05:57.827 ************************************ 00:05:57.827 22:54:15 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:57.827 * Looking for test storage... 00:05:57.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:57.827 22:54:15 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:57.827 22:54:15 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:57.827 22:54:15 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:57.827 22:54:15 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:57.827 22:54:15 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.827 22:54:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:57.827 ************************************ 00:05:57.827 START TEST nvmf_target_core 00:05:57.827 ************************************ 00:05:57.827 22:54:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:58.089 * Looking for test storage... 00:05:58.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:58.089 22:54:15 
nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:58.089 ************************************ 00:05:58.089 START TEST nvmf_abort 00:05:58.089 ************************************ 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:58.089 * Looking for test storage... 
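The paths/export.sh traces above show the same toolchain directories (`/opt/go/1.21.1/bin`, `/opt/golangci/1.54.2/bin`, `/opt/protoc/21.7/bin`) being prepended on every `source`, so PATH accumulates duplicate entries as each test re-sources common.sh. A small order-preserving dedup sketch (not part of the test scripts, just an illustration of the redundancy visible in the log):

```python
def dedup_path(path):
    # Keep only the first occurrence of each PATH entry, preserving order,
    # so earlier (higher-priority) directories still win lookups.
    seen = set()
    out = []
    for entry in path.split(":"):
        if entry not in seen:
            seen.add(entry)
            out.append(entry)
    return ":".join(out)

print(dedup_path("/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/usr/bin"))
# /opt/go/1.21.1/bin:/usr/bin
```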
00:05:58.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:58.089 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:58.090 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.090 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.090 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.090 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:58.090 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.090 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:05:58.090 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:58.090 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:58.090 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:58.090 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:58.090 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:58.090 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:58.090 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:58.090 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:58.351 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:58.351 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:58.351 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:58.351 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:58.351 22:54:15 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:58.351 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:58.351 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:58.351 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:58.351 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:58.351 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:58.351 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:58.351 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:58.351 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:58.351 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:05:58.351 22:54:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:06.539 22:54:23 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:06.539 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:06.539 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:06.539 22:54:23 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:06.539 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:06.540 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:06.540 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:06.540 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:06.540 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:06.540 Found net devices under 0000:31:00.0: cvl_0_0 00:06:06.540 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:06.540 22:54:23 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:06.540 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:06.540 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:06.540 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:06.540 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:06.540 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:06.540 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:06.540 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:06.540 Found net devices under 0000:31:00.1: cvl_0_1 00:06:06.540 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:06.540 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:06.540 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:06:06.540 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:06.540 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:06.540 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:06.540 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:06.540 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:06.540 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:06.540 22:54:23 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:06.540 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:06.540 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:06.540 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:06.540 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:06.540 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:06.540 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:06.540 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:06.540 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:06.540 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:06.540 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:06.540 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:06.540 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:06.540 22:54:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:06.540 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:06.540 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:06.540 22:54:24 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:06.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:06.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:06:06.540 00:06:06.540 --- 10.0.0.2 ping statistics --- 00:06:06.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:06.540 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:06:06.540 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:06.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:06.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:06:06.540 00:06:06.540 --- 10.0.0.1 ping statistics --- 00:06:06.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:06.540 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:06:06.540 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:06.540 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:06:06.540 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:06.540 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:06.540 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:06.540 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:06.540 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:06.540 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:06.540 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:06.540 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:06.540 22:54:24 
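The namespace plumbing traced above (nvmf/common.sh@229 through @268) can be read as a short setup script. The sketch below is an outline reconstructed from this log only, not the SPDK source itself; the interface names (cvl_0_0, cvl_0_1), addresses, and port are copied verbatim from the trace, and the commands assume root and two ports of the same physical NIC:

```shell
#!/usr/bin/env bash
# Hedged sketch of nvmf_tcp_init as echoed in the log above.
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

# Drop any stale addresses from both ports.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

# Move the target-side port into its own network namespace so target and
# initiator traffic actually crosses the wire instead of the loopback path.
ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"

# Initiator address on the host, target address inside the namespace.
ip addr add "$NVMF_INITIATOR_IP/24" dev cvl_0_1
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add "$NVMF_FIRST_TARGET_IP/24" dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up

# Open the NVMe/TCP port on the initiator-side interface, then verify
# reachability in both directions before the target is started.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 "$NVMF_FIRST_TARGET_IP"
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 "$NVMF_INITIATOR_IP"
```

Note the design choice visible in the log: because the target port lives in the namespace, the nvmf_tgt application itself is later launched under `ip netns exec cvl_0_0_ns_spdk`, which is exactly what the `NVMF_TARGET_NS_CMD` prefix captured above does.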
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:06.540 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:06.540 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:06.540 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=640293 00:06:06.540 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 640293 00:06:06.540 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:06.540 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 640293 ']' 00:06:06.540 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.540 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.540 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.540 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.540 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:06.540 [2024-07-24 22:54:24.144216] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:06:06.540 [2024-07-24 22:54:24.144266] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:06.540 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.540 [2024-07-24 22:54:24.236483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:06.540 [2024-07-24 22:54:24.318848] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:06.540 [2024-07-24 22:54:24.318910] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:06.540 [2024-07-24 22:54:24.318918] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:06.540 [2024-07-24 22:54:24.318925] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:06.540 [2024-07-24 22:54:24.318936] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:06.540 [2024-07-24 22:54:24.319075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.540 [2024-07-24 22:54:24.319244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.540 [2024-07-24 22:54:24.319244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:07.481 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.481 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:07.481 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:07.481 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:07.481 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:07.481 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:07.481 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:07.481 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.481 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:07.481 [2024-07-24 22:54:24.960972] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:07.481 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.481 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:07.481 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.481 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:07.481 Malloc0 00:06:07.481 22:54:24 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.481 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:07.481 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.481 22:54:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:07.481 Delay0 00:06:07.482 22:54:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.482 22:54:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:07.482 22:54:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.482 22:54:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:07.482 22:54:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.482 22:54:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:07.482 22:54:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.482 22:54:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:07.482 22:54:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.482 22:54:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:07.482 22:54:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.482 22:54:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:07.482 [2024-07-24 22:54:25.037001] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:07.482 22:54:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.482 22:54:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:07.482 22:54:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.482 22:54:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:07.482 22:54:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.482 22:54:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:07.482 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.482 [2024-07-24 22:54:25.198955] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:10.022 Initializing NVMe Controllers 00:06:10.022 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:10.022 controller IO queue size 128 less than required 00:06:10.022 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:10.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:10.022 Initialization complete. Launching workers. 
00:06:10.022 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28641 00:06:10.022 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28702, failed to submit 62 00:06:10.022 success 28645, unsuccess 57, failed 0 00:06:10.022 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:10.022 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.022 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.022 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.022 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:10.022 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:10.022 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:10.022 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:06:10.022 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:10.023 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:06:10.023 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:10.023 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:10.023 rmmod nvme_tcp 00:06:10.023 rmmod nvme_fabrics 00:06:10.023 rmmod nvme_keyring 00:06:10.023 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:10.023 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:06:10.023 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:06:10.023 22:54:27 
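The target provisioning that abort.sh performs (the rpc_cmd calls at @17 through @27 echoed above) reduces to a short rpc.py sequence. This is a hedged outline assembled from the log; the socket defaults, bdev arguments, NQN, and listener address are taken verbatim from the trace, and the comments are interpretations, not authoritative documentation of each flag:

```shell
#!/usr/bin/env bash
# Hedged sketch of the abort.sh target setup, as echoed in the log above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# TCP transport with the same options the log shows
# ('nvmf_create_transport -t tcp -o -u 8192 -a 256').
$RPC nvmf_create_transport -t tcp -o -u 8192 -a 256

# A 64 MiB malloc ramdisk with 4096-byte blocks, wrapped in a delay bdev
# that adds latency on every I/O path ('-r/-t/-w/-n 1000000'), so the
# abort tool has plenty of in-flight commands to cancel.
$RPC bdev_malloc_create 64 4096 -b Malloc0
$RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000

# Subsystem carrying the delayed namespace, plus data and discovery
# listeners on the target-side address from the namespace setup.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

With this in place, the abort example connects to `trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420` with queue depth 128, which is why the log then reports the controller's I/O queue size as smaller than required and warns that requests may queue in the driver.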
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 640293 ']' 00:06:10.023 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 640293 00:06:10.023 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 640293 ']' 00:06:10.023 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 640293 00:06:10.023 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:10.023 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:10.023 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 640293 00:06:10.023 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:10.023 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:10.023 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 640293' 00:06:10.023 killing process with pid 640293 00:06:10.023 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 640293 00:06:10.023 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 640293 00:06:10.023 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:10.023 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:10.023 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:10.023 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:10.023 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:10.023 22:54:27 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:10.023 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:10.023 22:54:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:11.933 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:11.933 00:06:11.933 real 0m13.942s 00:06:11.933 user 0m14.025s 00:06:11.933 sys 0m6.963s 00:06:11.933 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.933 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:11.933 ************************************ 00:06:11.933 END TEST nvmf_abort 00:06:11.933 ************************************ 00:06:12.193 22:54:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:12.194 ************************************ 00:06:12.194 START TEST nvmf_ns_hotplug_stress 00:06:12.194 ************************************ 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:12.194 * Looking for test storage... 
00:06:12.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:12.194 22:54:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:06:12.194 22:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:20.337 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:20.337 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:06:20.337 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:20.337 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:20.337 22:54:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:20.337 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:20.337 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:20.337 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:06:20.337 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:20.337 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:06:20.337 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:06:20.337 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:06:20.337 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:06:20.337 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:06:20.337 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:06:20.337 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:20.337 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:20.337 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:20.337 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:20.337 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:20.337 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:20.338 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:20.338 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:20.338 22:54:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:20.338 Found net devices under 0000:31:00.0: cvl_0_0 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:20.338 Found net devices 
under 0000:31:00.1: cvl_0_1 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 
addr flush cvl_0_0 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:20.338 22:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:20.338 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:20.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:20.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.484 ms 00:06:20.338 00:06:20.338 --- 10.0.0.2 ping statistics --- 00:06:20.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:20.338 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:06:20.338 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:20.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:20.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:06:20.338 00:06:20.338 --- 10.0.0.1 ping statistics --- 00:06:20.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:20.338 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:06:20.338 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:20.338 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:06:20.338 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:20.338 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:20.338 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:20.338 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:20.338 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:20.338 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:20.338 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:20.338 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:20.338 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:20.338 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:20.338 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:20.338 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=645675 00:06:20.338 22:54:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 645675 00:06:20.338 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:20.338 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 645675 ']' 00:06:20.338 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.339 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.339 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.339 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.339 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:20.339 [2024-07-24 22:54:38.111848] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:06:20.339 [2024-07-24 22:54:38.111898] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:20.599 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.599 [2024-07-24 22:54:38.203325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:20.599 [2024-07-24 22:54:38.279029] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:06:20.599 [2024-07-24 22:54:38.279080] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:20.599 [2024-07-24 22:54:38.279089] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:20.599 [2024-07-24 22:54:38.279096] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:20.599 [2024-07-24 22:54:38.279102] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:20.599 [2024-07-24 22:54:38.279224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.599 [2024-07-24 22:54:38.279388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.599 [2024-07-24 22:54:38.279388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.170 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.170 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:06:21.170 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:21.170 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:21.170 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:21.170 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:21.170 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:21.170 22:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 
00:06:21.431 [2024-07-24 22:54:39.065755] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:21.431 22:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:21.692 22:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:21.692 [2024-07-24 22:54:39.422749] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:21.692 22:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:21.952 22:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:22.213 Malloc0 00:06:22.213 22:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:22.213 Delay0 00:06:22.213 22:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.474 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:22.474 NULL1 00:06:22.735 22:54:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:22.735 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:22.735 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=646063 00:06:22.735 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:22.735 22:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.735 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.129 Read completed with error (sct=0, sc=11) 00:06:24.129 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.129 22:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.129 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.129 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.129 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.129 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.129 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.129 22:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:24.129 22:54:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:24.129 true 00:06:24.395 22:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:24.395 22:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.338 22:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.339 22:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:25.339 22:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:25.339 true 00:06:25.339 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:25.339 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.599 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.859 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:25.859 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:25.859 true 00:06:25.859 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:25.859 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.119 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.379 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:26.379 22:54:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:26.379 true 00:06:26.379 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:26.379 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.640 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.901 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:26.901 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:26.901 true 00:06:26.901 22:54:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:26.901 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.161 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.162 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:27.162 22:54:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:27.422 true 00:06:27.422 22:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:27.422 22:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.683 22:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.683 22:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:27.683 22:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:27.944 true 00:06:27.944 22:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:27.945 22:54:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.205 22:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.205 22:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:28.205 22:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:28.466 true 00:06:28.466 22:54:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:28.466 22:54:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.407 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.407 22:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.407 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.407 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.407 22:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:29.407 22:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:29.669 true 00:06:29.669 22:54:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:29.669 22:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.930 22:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.930 22:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:29.930 22:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:30.190 true 00:06:30.190 22:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:30.190 22:54:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.460 22:54:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.460 22:54:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:30.460 22:54:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:30.786 true 00:06:30.786 22:54:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:30.786 22:54:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:30.786 22:54:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:31.047 22:54:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:06:31.047 22:54:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:06:31.308 true
00:06:31.308 22:54:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063
00:06:31.308 22:54:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:31.308 22:54:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:31.569 22:54:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:06:31.569 22:54:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:06:31.569 true
00:06:31.829 22:54:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063
00:06:31.829 22:54:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:31.829 22:54:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:32.089 22:54:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:06:32.089 22:54:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:06:32.089 true
00:06:32.349 22:54:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063
00:06:32.349 22:54:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:32.349 22:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:32.609 22:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:06:32.609 22:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:06:32.609 true
00:06:32.609 22:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063
00:06:32.609 22:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:32.870 22:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:33.131 22:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:06:33.131 22:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:06:33.131 true
00:06:33.131 22:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063
00:06:33.131 22:54:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:33.392 22:54:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:33.652 22:54:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:06:33.652 22:54:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:06:33.652 true
00:06:33.652 22:54:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063
00:06:33.652 22:54:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:34.594 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:34.594 22:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:34.594 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:34.855 22:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:06:34.855 22:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:06:34.855 true
00:06:35.116 22:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063
00:06:35.116 22:54:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:36.056 22:54:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:36.056 22:54:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:06:36.056 22:54:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:06:36.056 true
00:06:36.056 22:54:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063
00:06:36.056 22:54:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:36.317 22:54:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:36.578 22:54:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:06:36.578 22:54:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:06:36.578 true
00:06:36.578 22:54:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063
00:06:36.578 22:54:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:36.840 22:54:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:36.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:36.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:36.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:36.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:36.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:36.840 [2024-07-24 22:54:54.621813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:06:36.840 [2024-07-24 22:54:54.621867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:06:36.840 [2024-07-24 22:54:54.621897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:06:36.840 [2024-07-24 22:54:54.621923] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd read-error lines from 22:54:54.621953 through 22:54:54.629197 removed as duplicates]
00:06:37.129 [2024-07-24 22:54:54.629228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:06:37.129 [2024-07-24 22:54:54.629254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.629280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.629308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.629333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.629362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.629390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.629419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.629446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.629482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.629509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.629540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.629566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.629592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.629622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.629650] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.629677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.629705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.629731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.629761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.629790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.629820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.629847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.629876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.629902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.629928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.629955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.629982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.630009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.630036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.630063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.630095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.630128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.630155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.630181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.630575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.630602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.630627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.630649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.630676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.630704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.630730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.630763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.630790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 
22:54:54.630819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.630845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.630875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.630902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.129 [2024-07-24 22:54:54.630931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.630958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.630985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 
[2024-07-24 22:54:54.631557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631935] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.631981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.632003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.632026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.632048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.632071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.632093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.632115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.632137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.632159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.632519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.632550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.632577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.632604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.130 [2024-07-24 22:54:54.632633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.632661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.632689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.632717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.632746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.632776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.632805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.632833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.632862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.632890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.632919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.632946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.632975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.633004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.633032] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.633060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.633102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.633135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.633166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.633205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.633238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.633273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.633304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.633336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.633365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.633388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.633419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.633448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.633478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.633505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.633535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.633562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.633591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.633617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.633644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.633670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.633697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.633732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.633763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.130 [2024-07-24 22:54:54.633796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.633824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.633859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.633887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 
22:54:54.633942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.633970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.633999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.634026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.634065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.634093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.634122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.634151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.634179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.634206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.634234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.634262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.634291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.634319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.634347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.634376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.634405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.634767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.634795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.634826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.634852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.634880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.634905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.634933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.634962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.634990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.635017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.635040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.635068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 
[2024-07-24 22:54:54.635106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.635132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.635173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.635204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.635239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.635270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.635302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.635331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.635363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.635394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.635421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.635450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.635477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.635504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.635534] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.635563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.635592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.635621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.635645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.635675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.635702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.635732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.635768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.635799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.635827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.635855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.635881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.635909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.131 [2024-07-24 22:54:54.635935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.131 [2024-07-24 22:54:54.635964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:06:37.135 [2024-07-24 22:54:54.646736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.646768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.646797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.646824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.646859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.646890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.646917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.646944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.646984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.647016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.647042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.647078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.647112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.647147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.647183] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.647206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.647234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.647262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.647290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.647318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.647345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.647371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.647400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.647428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.647456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.647480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.647509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.647535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.647558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.647589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.647618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.647646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.647676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.647707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.647737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.647787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.647816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.648014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.648051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.648082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.648113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.648141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.648169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 
22:54:54.648197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.648227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.648256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.648287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.648316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.648353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.648379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.648405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.648431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.648454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.648487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.648518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:37.135 [2024-07-24 22:54:54.648554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.648587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:06:37.135 [2024-07-24 22:54:54.648621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.648654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.648689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.135 [2024-07-24 22:54:54.648724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.648766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.648793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.648824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.648853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.648883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.648910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.648939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.648972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.649002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.649033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.649074] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.649104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.649133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.649163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.649191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.649221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.649249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.649305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.649334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.649366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.649395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.649425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.649452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.649479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.649507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.136 [2024-07-24 22:54:54.649531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.649558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.649589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.649618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.649646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.649675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.649701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.649729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.649762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.649792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.649818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.649847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.649877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.649903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.649948] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.650080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.650108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.650140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.650166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.650403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.650435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.650465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.650492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.650521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.650551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.650578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.650606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.650637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.650663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.650696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.650722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.650758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.650788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.650819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.650849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.650880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.650907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.650941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.650970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.650995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.651021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.651048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.651075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 
22:54:54.651103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.651129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.651158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.651184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.651213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.651242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.651274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.651306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.651334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.651360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.651390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.651419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.651447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.651477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.651505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.651533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.651562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.651591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.136 [2024-07-24 22:54:54.651619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.651649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.651676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.651702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.651732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.651782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.651811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.651854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.651883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.651911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.651941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 
[2024-07-24 22:54:54.651971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.651996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.652022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.652051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.652078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.652110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.652531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.652568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.652597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.652631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.652657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.652688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.652717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.652746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.652777] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.652806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.652840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.652867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.652901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.652929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.652959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.652989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.653018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.653045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.653075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.653104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.653131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.653158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.137 [2024-07-24 22:54:54.653187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.137 [2024-07-24 22:54:54.653217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:06:37.137 22:54:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:06:37.137 22:54:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.663835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.663863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.663893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.663922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.663951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.663979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.664008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.664040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.664068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.664101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.664131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.664158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.664187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.664217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 
[2024-07-24 22:54:54.664270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.664298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.664327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.664356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.664388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.664418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.664448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.664477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.664506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.664536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.664566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.664601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.664633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.664664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.664699] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.664727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.664761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.664792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.664819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.664850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.664890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.664918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.664946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.664972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.664998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.665029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.665061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.665090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.141 [2024-07-24 22:54:54.665123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.142 [2024-07-24 22:54:54.665152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.665191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.665220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.665247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.665277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.665304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.665331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.665363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.665394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.665421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.665444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.665475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.665506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.665536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.666086] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.666118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.666147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.666176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.666207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.666235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.666293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.666321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.666352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.666381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.666411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.666449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.666478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.666506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.666538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.666568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.666596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.666624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.666654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.666693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.666726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.666760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.666786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.666813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.666846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.666877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.666916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.666955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.666986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 
22:54:54.667013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.667041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.667068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.667100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.667128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.667155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.667183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.667219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.667246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.667270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.667294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.667318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.667341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.667366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.667394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.667421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.667448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.667472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.667498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.667521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.667546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.667570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.667594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.667619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.667647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.667676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.667706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.667736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.667773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 
[2024-07-24 22:54:54.667802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.667832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.667864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.142 [2024-07-24 22:54:54.667896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.667924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.667952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.668088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.668123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.668152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.668183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.668423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.668458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.668489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.668516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.668548] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.668580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.668608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.668641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.668670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.668699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.668737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.668771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.668802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.668839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.668867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.668897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.668929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.668958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.668991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.143 [2024-07-24 22:54:54.669022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.669065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.669096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.669125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.669156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.669186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.669219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.669251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.669278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.669304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.669335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.669367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.669396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.669425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.669458] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.669491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.669523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.669552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.669580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.669607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.669635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.669672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.669708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.669739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.669772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.669801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.669834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.669863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.669892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.669919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.669959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.669989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.670016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.670048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.670079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.670110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.670140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.670169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.670201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.670232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.670616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.670645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 22:54:54.670674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143 [2024-07-24 
22:54:54.670703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.143
[2024-07-24 22:54:54.670730 through 22:54:54.681785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: (identical "Read NLB 1 * block size 512 > SGL length 1" message repeated for each subsequent read command; duplicate entries elided) 00:06:37.147 [2024-07-24 
22:54:54.681814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.681861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.681891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.681921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.681950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.681979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.682026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.682056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.682090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.682129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.682162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.682192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.682227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.682254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.682284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.682316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.682345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.682373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.682403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.682434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.682464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.682492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.682521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.682550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.682578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.682605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.682632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.682669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.682699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 
[2024-07-24 22:54:54.682728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.682765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.682796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.682830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.682859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.682895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.682923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.682953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.682980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.683009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.683038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.683076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.683109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.683140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.683165] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.683196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.683224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.683253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.683282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.683310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.683451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.683783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.683815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.683844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.683880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.683909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.683940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.683970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.683997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.147 [2024-07-24 22:54:54.684028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.684057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.684089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.684119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.684149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.684179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.684208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.684241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.684271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.147 [2024-07-24 22:54:54.684304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.684332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.684363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:37.148 [2024-07-24 22:54:54.684393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.684420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 
* block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.684456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.684487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.684518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.684544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.684579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.684616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.684644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.684675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.684702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.684731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.684761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.684792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.684825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.684855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 
22:54:54.684884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.684912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.684937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.684962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.684986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.685010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.685042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.685068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.685100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.685129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.685162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.685192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.685224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.685254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.685282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.685314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.685341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.685369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.685399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.685431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.685461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.685504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.685530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.685559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.685587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.685614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.685982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.686014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.686044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 
[2024-07-24 22:54:54.686073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.686111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.686140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.686169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.686202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.686231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.686263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.686293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.686319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.686348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.686378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.686409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.686439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.686471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.686504] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.686535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.686563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.686592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.686622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.686655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.686685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.686712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.686747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.686780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.686809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.686857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.686884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.686913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.686944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.148 [2024-07-24 22:54:54.686973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.148 [2024-07-24 22:54:54.687016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.687045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.687080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.687111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.687141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.687172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.687202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.687230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.687269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.687298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.687327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.687356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.687383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.687424] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.687453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.687477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.687509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.687540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.687569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.687599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.687635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.687664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.687692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.687719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.687748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.687786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.687813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.687837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.687868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.687899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.687926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.688064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.688365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.688392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.688438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.688465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.688493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.688525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.688555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.688589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.688620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 22:54:54.688652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149 [2024-07-24 
22:54:54.688680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.149
[... same error line repeated several hundred times (timestamps 22:54:54.688710 through 22:54:54.699246); repeats elided ...]
22:54:54.699280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.699310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.699342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.699695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.699733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.699767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.699796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.699831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.699864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.699893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.699922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.699951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.699978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.700007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.700047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.700075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.700104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.700136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.700163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.700191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.700218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.700247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.700283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.700314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.700342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.700368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.700398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.700425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.700455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 
[2024-07-24 22:54:54.700483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.700514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.700552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.700581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.700611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.700643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.700670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.700700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.700730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.152 [2024-07-24 22:54:54.700764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.700821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.700851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.700881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.700911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.700940] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.700968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.700995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.701033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.701059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.701088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.701118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.701148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.701178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.701210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.701240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.701273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.701303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.701330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.701360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.153 [2024-07-24 22:54:54.701387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.701417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.701448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.701475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.701510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.701538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.701574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.701603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.701632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.701779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.702026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.702055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.702086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.702120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.702148] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.702178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.702207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.702236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.702269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.702305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.702334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.702363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.702393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.702427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.702457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.702486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.702516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.702545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.702573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.702607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.702638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.702669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.702695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.702723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.702757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.702793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.702824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.702853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.702886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.702916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.702944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.702975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.703004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 
22:54:54.703038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.703066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.703096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.703125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.703156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.703186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.703214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.703242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.703275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.703305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.703333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.703360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.703390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.703418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.703448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.703477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.703505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.703535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.153 [2024-07-24 22:54:54.703561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.703585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.703610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.703634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.703658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.703682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.703707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.703731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.703759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.703783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.703808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 
[2024-07-24 22:54:54.704176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.704205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.704264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.704289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.704318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.704346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.704380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.704424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.704454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.704481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.704512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.704544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.704574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.704602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.704634] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.704662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.704691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.704723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.704759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.704791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.704820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.704852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.704879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.704910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.704940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.704969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.705000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.705029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.705068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.154 [2024-07-24 22:54:54.705095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.705123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.705152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.705183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.705239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.705269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.705297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.705324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.705352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.705382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.705409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.705438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.705466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.705501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.705530] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.705559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.705585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.705612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.705639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.705669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.705697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.705724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.705762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.705792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.705819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.705846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.705875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.705902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.705935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.154 [2024-07-24 22:54:54.705964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical *ERROR* line repeated verbatim through 22:54:54.716995; duplicates elided]
00:06:37.157 [2024-07-24 22:54:54.716995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.157 [2024-07-24 22:54:54.717026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.157 [2024-07-24 22:54:54.717055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.157 [2024-07-24 22:54:54.717083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.157 [2024-07-24 22:54:54.717110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.157 [2024-07-24 22:54:54.717146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.157 [2024-07-24 22:54:54.717176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.157 [2024-07-24 22:54:54.717202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.157 [2024-07-24 22:54:54.717231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.157 [2024-07-24 22:54:54.717260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.157 [2024-07-24 22:54:54.717288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.157 [2024-07-24 22:54:54.717320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.157 [2024-07-24 22:54:54.717351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.157 [2024-07-24 22:54:54.717381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.157 [2024-07-24 22:54:54.717410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.157 [2024-07-24 
22:54:54.717446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.157 [2024-07-24 22:54:54.717477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.157 [2024-07-24 22:54:54.717505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.717536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.717560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.717592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.718059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.718101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.718128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.718155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.718186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.718214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.718246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.718276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.718306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.718338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.718368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.718397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.718431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.718460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.718492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.718522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.718557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.718587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.718616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.718644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.718672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.718701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.718730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 
[2024-07-24 22:54:54.718762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.718794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.718826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.718855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.718884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.718912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.718952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.718984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.719012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.719045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.719075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.719104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.719133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.719163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.719191] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.719220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.719254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.719290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.719318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.719349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.719376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.719405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.719431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.719461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.719488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.719524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.719554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.719586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.719616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.158 [2024-07-24 22:54:54.719661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.719685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.719715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.719748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.719788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.719817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.719847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.719876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.719905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.719931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.719961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.719990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.720376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.720407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.720435] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.720465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.720494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.720524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.720555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.720584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.720641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.720670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.720700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.720728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.720760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.720794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.158 [2024-07-24 22:54:54.720823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:37.159 [2024-07-24 22:54:54.720850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 
22:54:54.720881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.720910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.720941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.720973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.721005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.721033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.721063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.721095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.721129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.721159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.721188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.721228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.721253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.721283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.721317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.721350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.721387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.721418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.721444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.721472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.721504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.721537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.721569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.721595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.721624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.721648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.721678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.721705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.721734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 
[2024-07-24 22:54:54.721770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.721798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.721828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.721861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.721891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.721920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.721948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.721978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.722010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.722039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.722064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.722092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.722124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.722153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.722183] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.722211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.722243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.722271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.722613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.722645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.722674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.722702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.722730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.722764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.722794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.722825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.722854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.722882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.722911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.159 [2024-07-24 22:54:54.722941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.722974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.723003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.723030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.723062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.723093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.723123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.723152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.723182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.723216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.723245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.723274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.723302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.723331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.723362] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.723394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.723423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.723460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.723490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.723520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.723556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.723585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.723615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.723642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.723676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.723704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.723732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.723763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.159 [2024-07-24 22:54:54.723805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1
00:06:37.159 [2024-07-24 22:54:54.723837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
(previous message repeated for subsequent entries through [2024-07-24 22:54:54.734857])
block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.734884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.734917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.734946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.734975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.735004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.735035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.735065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.735094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.735122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.735153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.735183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.735213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.735246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.735276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 
22:54:54.735310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.735339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.735367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.735397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.735426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.735462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.735492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.735519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.735550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.735580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.735607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.735643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.735679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.735707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.735732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.735765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.735797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.735829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.735859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.735889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.735919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.735949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.735980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.736011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.736363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.736395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.736421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.736451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.736481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 
[2024-07-24 22:54:54.736523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.736552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.736582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.736606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.736635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.736666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.736694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.736725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.736761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.736793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.736823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.736860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.736890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.736921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.736949] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.736977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.737011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.737042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.737069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.737098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.737128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.737160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.737191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.737220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.737252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.737283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.737314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.737343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.737371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.163 [2024-07-24 22:54:54.737407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.737437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.737465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.737496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.737524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.163 [2024-07-24 22:54:54.737559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.737591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.737620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.737670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.737700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.737730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.737763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.737791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.737835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.737866] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.737900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.737932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.738256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.738289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.738325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.738356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.738381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.738408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.738437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.738467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.738496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.738524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.738552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.738578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.738610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.738679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.738710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.738736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.738767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.738793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.738826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.738859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.738889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.738920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.738949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.738980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.739011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.739043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 
22:54:54.739072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.739102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.739132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.739166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.739195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.739226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.739254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.739313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.739342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.739372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.739401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.739432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.739461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.739489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.739524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.739552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.739580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.739610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.739637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.739669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.739698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.739727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.739758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.739789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.739818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.739847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.739912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.739938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.739967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 
[2024-07-24 22:54:54.739999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.740027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.740056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.740082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.740112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.740140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.740177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.740206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.740234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.740262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.740294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.740319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.740351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.740384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.740418] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.740448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.740476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.740504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.740535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.740563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.740592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.741559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.741591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.741622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.741649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.741677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.741707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.741737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.741770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.164 [2024-07-24 22:54:54.741818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.164 [2024-07-24 22:54:54.741851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.165 [2024-07-24 22:54:54.741882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.165 [2024-07-24 22:54:54.741918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.165 [2024-07-24 22:54:54.741947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.165 [2024-07-24 22:54:54.741978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.165 [2024-07-24 22:54:54.742011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.165 [2024-07-24 22:54:54.742040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.165 [2024-07-24 22:54:54.742070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.165 [2024-07-24 22:54:54.742098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.165 [2024-07-24 22:54:54.742127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.165 [2024-07-24 22:54:54.742157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.165 [2024-07-24 22:54:54.742185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.165 [2024-07-24 22:54:54.742221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.165 [2024-07-24 22:54:54.742251] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:06:37.168 [2024-07-24 22:54:54.752271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.752317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.752348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.752380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.752411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.752447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.752475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.752817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.752846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.752876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.752904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.752933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.752962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.752992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.753024] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.753054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.753083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.753117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.753148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.753179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.753210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.753239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.753269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.753299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.753330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.753384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.753415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.753445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.753474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.753504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.753533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.753563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.753591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.753648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.753678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.753715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.753742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.753773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.753806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.753836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.753867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.753895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.753933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 
22:54:54.753961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.753992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.754021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.754061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.754091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.754116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.754145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.754175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.754204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.754241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.754271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.754301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.754331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.754359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.754388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.754413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.754444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.754474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.754508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.754541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.754573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.754602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.754630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.754666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.754695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.754726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.754758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.755123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 [2024-07-24 22:54:54.755152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.168 
[2024-07-24 22:54:54.755180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.755221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.755255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.755287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.755316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.755346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.755378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.755408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.755440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.755475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.755505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.755535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.755564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.755596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.755629] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.755659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.755701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.755732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.755764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.755817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.755848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.755878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.755930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.755961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.755992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.756019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.756048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.756078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.756107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.169 [2024-07-24 22:54:54.756137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.756188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.756216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.756245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.756274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.756306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.756336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.756366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.756396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.756434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.756463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.756488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.756518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.756545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.756574] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.756598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.756627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.756659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.756692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.756722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.756756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.756787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.756818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.756851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.756882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.756913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.756950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.756980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.757013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.757042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.757071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.757120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.757153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.757501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:37.169 [2024-07-24 22:54:54.757539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.757568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.757595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.757630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.757661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.757692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.757725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.757753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.757783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.757814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.757841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.757877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.757907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.757938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.757966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.757994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.758024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.758053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.758083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.758113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.758143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.758173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.758206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 
[2024-07-24 22:54:54.758235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.758262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.758302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.758334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.758363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.758395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.758423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.758451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.758482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.758510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.758559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.169 [2024-07-24 22:54:54.758591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.170 [2024-07-24 22:54:54.758619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.170 [2024-07-24 22:54:54.758648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.170 [2024-07-24 22:54:54.758680] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.170 [2024-07-24 22:54:54.758711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.170 [2024-07-24 22:54:54.758745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.170 [2024-07-24 22:54:54.758780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.170 [2024-07-24 22:54:54.758811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.170 [2024-07-24 22:54:54.758842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.170 [2024-07-24 22:54:54.758872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.170 [2024-07-24 22:54:54.758900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.170 [2024-07-24 22:54:54.758939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.170 [2024-07-24 22:54:54.758966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.170 [2024-07-24 22:54:54.758997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.170 [2024-07-24 22:54:54.759029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.170 [2024-07-24 22:54:54.759058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.170 [2024-07-24 22:54:54.759086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.170 [2024-07-24 22:54:54.759120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.170 [2024-07-24 22:54:54.759152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.170 [2024-07-24 22:54:54.759182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.170 [2024-07-24 22:54:54.759210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.170 [2024-07-24 22:54:54.759252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.170 [2024-07-24 22:54:54.759278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.170 [2024-07-24 22:54:54.759310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.170 [2024-07-24 22:54:54.759341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.170 [2024-07-24 22:54:54.759371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.170 [2024-07-24 22:54:54.759401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.170 [2024-07-24 22:54:54.759432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.170 [2024-07-24 22:54:54.759793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.170 [2024-07-24 22:54:54.759824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.170 [2024-07-24 22:54:54.759852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.170 [2024-07-24 22:54:54.759892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.170 [2024-07-24 22:54:54.759920] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.170 [2024-07-24 22:54:54.759948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:06:37.173 [... identical nvmf_bdev_ctrlr_read_cmd *ERROR* lines repeated through 2024-07-24 22:54:54.770289 omitted ...]
> SGL length 1 00:06:37.173 [2024-07-24 22:54:54.770318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.770347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.770383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.770414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.770442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.770471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.770497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.770539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.770568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.770594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.770627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.770656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.770685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.770823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.770854] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.770885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.770916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.770945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.770979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.771004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.771038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.771071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.771099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.771129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.771158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.771588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.771622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.771651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.771677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.771706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.771740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.771769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.771793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.771818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.771843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.771868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.771894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.771919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.771943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.771968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.771993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.772017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.772041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 
22:54:54.772065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.772091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.772118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.772150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.772181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.772211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.772239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.772269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.772300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.772330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.772360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.772385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.772418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.772444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.772472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.772502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.772532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.772562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.772590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.772628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.772657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.772684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.772711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.772740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.772773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.772802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.772830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.772862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.173 [2024-07-24 22:54:54.772889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 
[2024-07-24 22:54:54.772921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.772950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.772980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.773018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.773195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.773229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.773259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.773288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.773326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.773357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.773385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.773414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.773441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.773473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.773503] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.773530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.773560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.773589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.773623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.773651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.773681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.773716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.773744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.773778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.773806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.773837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.773866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.773897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.773928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.174 [2024-07-24 22:54:54.773956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.773988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.774014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.774049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.774076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.774114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.774141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.774170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.774203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.774232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.774265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.774296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.774325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.774355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.774385] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.774417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.774447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.774476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.774508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.774536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.774564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.774597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.774627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.774657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.774688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.774719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.774748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.774791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.774829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.774859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.774894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.774925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.774954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.774981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.775015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.775044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.775073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.775103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.775135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.775431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.775465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.775496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.775526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 
22:54:54.775556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.775584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.775614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.775647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.775678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.775708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.775739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.775775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.775804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.775833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.775862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.775895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.775925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.775956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.775986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.776015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.776046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.776092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.776121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.776151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.776180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.776211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.776246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.776282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.776311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.776336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.776369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.776399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.776426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 
[2024-07-24 22:54:54.776469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.776498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.174 [2024-07-24 22:54:54.776526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.175 [2024-07-24 22:54:54.776559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.175 [2024-07-24 22:54:54.776589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.175 [2024-07-24 22:54:54.776620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.175 [2024-07-24 22:54:54.776653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.175 [2024-07-24 22:54:54.776680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.175 [2024-07-24 22:54:54.776710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.175 [2024-07-24 22:54:54.776738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.175 [2024-07-24 22:54:54.776768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.175 [2024-07-24 22:54:54.776797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.175 [2024-07-24 22:54:54.776822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.175 [2024-07-24 22:54:54.776848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.175 [2024-07-24 22:54:54.776878] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.175 [2024-07-24 22:54:54.776910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.175 [2024-07-24 22:54:54.776942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.175 [2024-07-24 22:54:54.776973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.175 [2024-07-24 22:54:54.777003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.175 [2024-07-24 22:54:54.777033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.175 [2024-07-24 22:54:54.777062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.175 [2024-07-24 22:54:54.777092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.175 [2024-07-24 22:54:54.777119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.175 [2024-07-24 22:54:54.777153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.175 [2024-07-24 22:54:54.777178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.175 [2024-07-24 22:54:54.777203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.175 [2024-07-24 22:54:54.777226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.175 [2024-07-24 22:54:54.777251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.175 [2024-07-24 22:54:54.777274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.175 [2024-07-24 22:54:54.777299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:06:37.179 [2024-07-24 22:54:54.788501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.788525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.788550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.788575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.788599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.788624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.788649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.788674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.788708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.788735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.788770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.788799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.788829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.788859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.788889] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.788916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.789263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.789293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.789322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.789355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.789387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.789419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.789450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.789481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.789523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.789556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.789585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.789618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.789645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.789676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.789707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.789738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.789774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.789802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.789836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.789902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.789934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.789964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.789992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.790021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.790056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.790088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.790117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 
22:54:54.790145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.790174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.790207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.790235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.790264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.790294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.790324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.790360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.790390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.790418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.790445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.790473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.790497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.790522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.790546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.790571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.790594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.790621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.790649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.790674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.790705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.790736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.790766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.790797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.791174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.791204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.791237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.791264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 [2024-07-24 22:54:54.791293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.179 
[2024-07-24 22:54:54.791322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.791351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.791405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.791435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.791463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.791492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.791524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.791552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.791582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.791608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.791644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.791678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.791708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.791741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.791772] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.791806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.791834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.791867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.791905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.791934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.791969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.791999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.792030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.792059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.792087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.792118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.792146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.792175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.792211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.180 [2024-07-24 22:54:54.792243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.792274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.792302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.792331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.792362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.792392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.792421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.792450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.792477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.792509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.792539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.792570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.792600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.792628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.792660] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.792690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.792728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.792764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.792793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.792845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.792873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.792902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.792939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.792967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.792999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.793031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.793074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.793104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.793132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.793161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.793284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.793329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.793354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.793388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.793417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.793447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.793476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.793508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.793537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.793566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.793594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.793624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:37.180 [2024-07-24 22:54:54.794073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.794103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.794135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.794166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.794194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.794222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.794256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.794289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.794317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.794342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.180 [2024-07-24 22:54:54.794371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.181 [2024-07-24 22:54:54.794395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.181 [2024-07-24 22:54:54.794428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.181 [2024-07-24 22:54:54.794454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.181 [2024-07-24 22:54:54.794478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.181 
[2024-07-24 22:54:54.794502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.181 [2024-07-24 22:54:54.794526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.181 [2024-07-24 22:54:54.794550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.181 [2024-07-24 22:54:54.794574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.181 [2024-07-24 22:54:54.794599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.181 [2024-07-24 22:54:54.794624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.181 [2024-07-24 22:54:54.794647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.181 [2024-07-24 22:54:54.794672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.181 [2024-07-24 22:54:54.794696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.181 [2024-07-24 22:54:54.794721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.181 [2024-07-24 22:54:54.794745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.181 [2024-07-24 22:54:54.794774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.181 [2024-07-24 22:54:54.794805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.181 [2024-07-24 22:54:54.794835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.181 [2024-07-24 22:54:54.794864] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.181 [2024-07-24 22:54:54.794893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.181 [2024-07-24 22:54:54.794926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.181 [2024-07-24 22:54:54.794952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.181 [2024-07-24 22:54:54.794984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.181 [2024-07-24 22:54:54.795016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.181 [2024-07-24 22:54:54.795045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.181 [2024-07-24 22:54:54.795074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.181 [2024-07-24 22:54:54.795107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.181 [2024-07-24 22:54:54.795137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.181 [2024-07-24 22:54:54.795169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.181 [2024-07-24 22:54:54.795199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.181 [2024-07-24 22:54:54.795228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.181 [2024-07-24 22:54:54.795273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.181 [2024-07-24 22:54:54.795303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.181 [2024-07-24 22:54:54.795333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[the same "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" error repeated verbatim several hundred times between 22:54:54.795363 and 22:54:54.805995 (log times 00:06:37.181–00:06:37.184); duplicates elided]
true 00:06:37.182
block size 512 > SGL length 1 00:06:37.184 [2024-07-24 22:54:54.806026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.184 [2024-07-24 22:54:54.806057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.806090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.806119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.806150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.806179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.806208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.806236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.806267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.806297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.806327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.806356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.806383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.806416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 
22:54:54.806446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.806491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.806522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.806555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.806584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.806613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.806643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.806671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.807027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.807057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.807086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.807114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.807148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.807186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.807216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.807240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.807269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.807293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.807317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.807342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.807367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.807391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.807421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.807449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.807484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.807773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.807804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.807836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.807864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 
[2024-07-24 22:54:54.807894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.807930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.807958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.807986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.808021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.808052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.808091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.808120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.808150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.808177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.808209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.808260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.808291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.808320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.808351] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.808378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.808439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.808468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.808498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.808528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.808558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.808601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.808632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.808663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.808691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.808720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.808746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.808775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.808811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.185 [2024-07-24 22:54:54.808840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.808871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.808899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.808935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.808960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.808988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.809029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.809055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.809084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.809110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.809140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.809176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.809208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.809417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.809448] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.809479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.809508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.809537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.809565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.809592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.809622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.809652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.809683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.809711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.809738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.809774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.809806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.185 [2024-07-24 22:54:54.809839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.809867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.809898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.809925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.809968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.809996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.810025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.810053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.810084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.810118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.810147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.810175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.810205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.810234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.810258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.810287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 
22:54:54.810320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.810347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.810381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.810419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.810448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.810478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.810504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.810534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.810562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.810602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.810630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.810654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.810686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.810726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.810761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.810792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.810821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.810851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.810882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.810907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.810935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.810965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.810993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.811022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.811053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.811087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.811116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.811148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.811176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 
[2024-07-24 22:54:54.811205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.811234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.811269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.811299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.811329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.811461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.811489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.811520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.811550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.811598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.811627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.811657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.811686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.811715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.811748] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.811781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.811809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.811842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.811869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.811912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.811942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.811970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.812190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.812225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.812252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.812280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.812307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.812337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.812376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.186 [2024-07-24 22:54:54.812405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.812431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.812459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.812489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.812517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.186 [2024-07-24 22:54:54.812549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.187 [2024-07-24 22:54:54.812576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.187 [2024-07-24 22:54:54.812607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.187 [2024-07-24 22:54:54.812636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.187 [2024-07-24 22:54:54.812665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.187 [2024-07-24 22:54:54.812699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.187 [2024-07-24 22:54:54.812728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.187 [2024-07-24 22:54:54.812761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.187 [2024-07-24 22:54:54.812787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.187 [2024-07-24 22:54:54.812818] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.187 [2024-07-24 22:54:54.812847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.189 22:54:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:37.189 22:54:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.190 [2024-07-24 22:54:54.823860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:06:37.190 [2024-07-24 22:54:54.823887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.823915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.823944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.823974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.824009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.824042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.824072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.824100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.824128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.824164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.824201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.824226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.824258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.824288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.824317] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.824350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.824381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.824412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.824444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.824474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.824504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.824537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.824570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.824599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.824627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.824655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.824684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.824716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.824755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.824785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.824813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.824843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.824873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.824912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.824941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.824970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.824998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.825027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.825057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.825081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.825113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.825143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.825172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 
22:54:54.825206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.825235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.825265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.825292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.825321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.825352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.825382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.825413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.825438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.825462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.825487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.825511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.825536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.825567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.825594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.825631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.825987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.826020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.826050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.826085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.826115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.826144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.826175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.826203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.826234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.826264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.826303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.826332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.826362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 
[2024-07-24 22:54:54.826392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.826419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.826452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.826477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.826507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.826536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.190 [2024-07-24 22:54:54.826566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.826595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.826626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.826655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.826685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.826715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.826748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.826785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.826819] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.826846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.826892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.826920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.826952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.826979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.827008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.827042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.827068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.827105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.827135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.827165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.827194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.827222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.827251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.191 [2024-07-24 22:54:54.827280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.827309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.827337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.827367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.827418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.827447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.827478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.827520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.827548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.827575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.827603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.827632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.827665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.827696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.827720] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.827757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.827786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.827814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.827841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.827871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.827898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.828263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.828292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.828342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.828373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.828403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.828438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.828467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.828499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.828528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.828559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.828607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.828636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.828667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.828701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.828730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.828762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.828792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.828820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.828852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.828879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.828910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.828937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 
22:54:54.828974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.829017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.829043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.829073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.829101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.829131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.829160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:37.191 [2024-07-24 22:54:54.829194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.829224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.829252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.829281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.829315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.829339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.829370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:06:37.191 [2024-07-24 22:54:54.829398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.191 [2024-07-24 22:54:54.829429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.192 [2024-07-24 22:54:54.829459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.192 [2024-07-24 22:54:54.829490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.192 [2024-07-24 22:54:54.829520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.192 [2024-07-24 22:54:54.829547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.192 [2024-07-24 22:54:54.829576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.192 [2024-07-24 22:54:54.829611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.192 [2024-07-24 22:54:54.829640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.192 [2024-07-24 22:54:54.829668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.192 [2024-07-24 22:54:54.829717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.192 [2024-07-24 22:54:54.829748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.192 [2024-07-24 22:54:54.829780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.192 [2024-07-24 22:54:54.829810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.192 [2024-07-24 22:54:54.829842] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.192 [2024-07-24 22:54:54.829876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.192 [2024-07-24 22:54:54.829906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.192 [2024-07-24 22:54:54.829934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.192 [2024-07-24 22:54:54.829964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.192 [2024-07-24 22:54:54.829992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.192 [2024-07-24 22:54:54.830027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.192 [2024-07-24 22:54:54.830054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.192 [2024-07-24 22:54:54.830081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.192 [2024-07-24 22:54:54.830106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.192 [2024-07-24 22:54:54.830136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.192 [2024-07-24 22:54:54.830166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.192 [2024-07-24 22:54:54.830195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.192 [2024-07-24 22:54:54.830223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.192 [2024-07-24 22:54:54.830612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.192 [2024-07-24 22:54:54.830644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.192 [... identical *ERROR* line repeated for timestamps 2024-07-24 22:54:54.830672 through 22:54:54.841343; duplicates omitted]
> SGL length 1 00:06:37.195 [2024-07-24 22:54:54.841369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.841399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.841428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.841462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.841497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.841528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.841556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.841582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.841612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.841642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.841670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.841699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.841733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.842121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.842152] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.842184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.842213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.842242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.842273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.842304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.842333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.842364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.842401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.842430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.842488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.842518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.842547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.842577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.842605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.842644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.842673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.842701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.842735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.842769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.842799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.842831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.842861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.842905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.842936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.842975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.843005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.843033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 22:54:54.843062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.195 [2024-07-24 
22:54:54.843094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.843130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.843159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.843186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.843214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.843241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.843270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.843302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.843331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.843360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.843389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.843420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.843454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.843484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.843515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.843544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.843574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.843607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.843636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.843667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.843704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.843735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.843771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.843804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.843836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.843865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.843896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.843925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.843958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 
[2024-07-24 22:54:54.843988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.844016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.844060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.844090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.844118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.844469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.844501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.844530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.844578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.844610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.844641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.844691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.844723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.844759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.844790] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.844817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.844851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.844880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.844910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.844938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.844968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.844997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.845040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.845071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.845101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.845132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.845157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.845191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.845221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.196 [2024-07-24 22:54:54.845251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.845278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.845306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.845343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.845382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.845411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.845439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.845467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.845494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.845520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.845550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.845583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.845617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.845647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.845675] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.845704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.845734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.845767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.845800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.845831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.845860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.845890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.845924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.845955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.845983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.846013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.846042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.846085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.846115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.846145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.846178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.846209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.846259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.846289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.846318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.846349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.846380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.846412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.846442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.196 [2024-07-24 22:54:54.846911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.846942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.846970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.846998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 
22:54:54.847023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.847047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.847075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.847107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.847137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.847170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.847201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.847230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.847259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.847288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.847321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.847353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.847380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.847410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.847439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.847464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.847487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.847511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.847535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.847558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.847582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.847607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.847631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.847655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.847679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.847703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.847728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.847756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.847786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 
[2024-07-24 22:54:54.847815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.847847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.847876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.847906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.847938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.847967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.847997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.848029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.848060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.848092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.848119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.848151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.848181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.848211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.848245] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.197 [2024-07-24 22:54:54.848274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.200 [2024-07-24 22:54:54.859006] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.200 [2024-07-24 22:54:54.859046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.200 [2024-07-24 22:54:54.859071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.200 [2024-07-24 22:54:54.859102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.200 [2024-07-24 22:54:54.859130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.200 [2024-07-24 22:54:54.859162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.200 [2024-07-24 22:54:54.859193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.200 [2024-07-24 22:54:54.859223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.200 [2024-07-24 22:54:54.859253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.200 [2024-07-24 22:54:54.859282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.200 [2024-07-24 22:54:54.859312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.200 [2024-07-24 22:54:54.859341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.200 [2024-07-24 22:54:54.859371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.200 [2024-07-24 22:54:54.859405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.200 [2024-07-24 22:54:54.859430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.200 [2024-07-24 22:54:54.859459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.200 [2024-07-24 22:54:54.859485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.200 [2024-07-24 22:54:54.859510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.200 [2024-07-24 22:54:54.859535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.200 [2024-07-24 22:54:54.859721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.200 [2024-07-24 22:54:54.859757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.200 [2024-07-24 22:54:54.859787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.200 [2024-07-24 22:54:54.859819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.200 [2024-07-24 22:54:54.859851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.200 [2024-07-24 22:54:54.859879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.859908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.859935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.859965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.859993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.860044] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.860072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.860304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.860331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.860372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.860400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.860433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.860464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.860494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.860523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.860554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.860584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.860615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.860645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.860674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.860705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.860733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.860766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.860794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.860829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.860861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.860892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.860924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.860955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.860986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.861016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.861046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.861079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.861110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 
22:54:54.861138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.861168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.861200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.861240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.861268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.861299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.861331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.861361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.861390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.861416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.861443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.861477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.861511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.861539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.861569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.861594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.861622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.861650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.861678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.861707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.861737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.861770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.861799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.861836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.862203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.862234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.862265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.862293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.862323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 
[2024-07-24 22:54:54.862357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.862388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.862425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.862457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.862486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.862517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.862546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.862584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.862614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.862644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.862673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.862701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.862731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.862765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.862795] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.862833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.862863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.862891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.862923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.862953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.862981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.863015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.863044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.863073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.863101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.863136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.863166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.863194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.863220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.201 [2024-07-24 22:54:54.863495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.863526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.863554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.863591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.863616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.201 [2024-07-24 22:54:54.863652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.863689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.863719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.863745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.863779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.863810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.863840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.863868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.863896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.863928] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.863957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.863984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.864015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.864044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.864086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.864114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.864145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.864174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.864201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.864229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.864256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.864285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.864320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.864352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.864384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.864414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.864444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.864476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.864504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.864531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.864562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.864594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.864624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.864655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.864683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.864713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.864741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.864776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 
22:54:54.864808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.864842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.864872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.864905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.864936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.864966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.864995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.865026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.865056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.865084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.865111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.865147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.865186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.865221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.865250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.865279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.865310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.865339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.865369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.865397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.865425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.865557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.865589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.865617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.865649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.865688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.865717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.865747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 [2024-07-24 22:54:54.865780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 
[2024-07-24 22:54:54.865810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.202 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:37.202
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.205 [2024-07-24 22:54:54.876653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.205 [2024-07-24 22:54:54.876682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.205 [2024-07-24 22:54:54.876725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.205 [2024-07-24 22:54:54.876762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.876792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.876845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.876876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.877000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.877032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.877061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.877090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.877117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.877149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.877185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 
[2024-07-24 22:54:54.877215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.877244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.877271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.877305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.877332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.877356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.877380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.877406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.877436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.877883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.877911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.877942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.877971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.878006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.878037] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.878071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.878099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.878128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.878159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.878188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.878217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.878247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.878277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.878308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.878336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.878375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.878410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.878442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.878469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.206 [2024-07-24 22:54:54.878495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.878521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.878551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.878578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.878607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.878638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.878666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.878698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.878729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.878764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.878793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.878823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.878850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.878877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.878908] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.878936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.878964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.878992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.879022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.879049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.879078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.879111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.879140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.879167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.879201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.879230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.879260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.879432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.879462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.879491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.879521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.879553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.879583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.879616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.879644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.879675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.879701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.879727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.879765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.879797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.879827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.879856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.879884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 
22:54:54.879915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.879947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.879981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.880011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.880039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.880067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.880096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.880128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.880165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.880193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.880222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.880246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.880274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.880305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.880334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.206 [2024-07-24 22:54:54.880360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.880391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.880423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.880453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.880484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.880516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.880546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.880578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.880607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.880639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.880670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.880732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.880765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.880792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 
[2024-07-24 22:54:54.880830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.880860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.880889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.880922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.880950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.880987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.881018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.881045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.881071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.881103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.881132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.881159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.881195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.881231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.881259] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.881286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.881313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.881346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.881372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.881506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.881536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.881567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.881603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.881633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.881664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.881693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.881720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.881759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.881787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.207 [2024-07-24 22:54:54.881820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.881854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.881883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.881909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.881939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.881971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.882451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.882489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.882520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.882559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.882589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.882620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.882647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.882676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.882713] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.882746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.882779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.882807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.882837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.882863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.882893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.882919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.882949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.882978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.883007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.883037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.883061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.883085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.883109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.883133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.883158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.883182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.883206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.883235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.883263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.883291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.883322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.883350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.883378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.883403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.883427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.883452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.883476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 
22:54:54.883509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.207 [2024-07-24 22:54:54.883534 - 22:54:54.894570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: last message repeated (identical *ERROR* lines omitted)
22:54:54.894599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.498 [2024-07-24 22:54:54.894628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.498 [2024-07-24 22:54:54.894658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.498 [2024-07-24 22:54:54.894687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.498 [2024-07-24 22:54:54.894712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.498 [2024-07-24 22:54:54.894740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.498 [2024-07-24 22:54:54.894774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.498 [2024-07-24 22:54:54.894803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.498 [2024-07-24 22:54:54.894831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.498 [2024-07-24 22:54:54.894859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.498 [2024-07-24 22:54:54.894885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.498 [2024-07-24 22:54:54.894912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.498 [2024-07-24 22:54:54.894941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.498 [2024-07-24 22:54:54.894970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.498 [2024-07-24 22:54:54.895000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.498 [2024-07-24 22:54:54.895029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.498 [2024-07-24 22:54:54.895056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.498 [2024-07-24 22:54:54.895084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.498 [2024-07-24 22:54:54.895115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.498 [2024-07-24 22:54:54.895141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.498 [2024-07-24 22:54:54.895171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.498 [2024-07-24 22:54:54.895200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.498 [2024-07-24 22:54:54.895230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.498 [2024-07-24 22:54:54.895261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.498 [2024-07-24 22:54:54.895292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.498 [2024-07-24 22:54:54.895322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.498 [2024-07-24 22:54:54.895352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.498 [2024-07-24 22:54:54.895381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.498 [2024-07-24 22:54:54.895410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.498 
[2024-07-24 22:54:54.895443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.498 [2024-07-24 22:54:54.895473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.895501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.895533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.895563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.895594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.895624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.895654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.895690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.895720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.895755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.895792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.896146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.896176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.896205] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.896246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.896275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.896300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.896332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.896361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.896391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.896419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.896452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.896483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.896513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.896541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.896576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.896607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.896640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.499 [2024-07-24 22:54:54.896670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.896699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.896730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.896765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.896796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.896829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.896862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.896894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.896925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.896954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.896993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.897021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.897053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.897082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.897111] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.897144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.897173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.897202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.897230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.897260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.897295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.897326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.897355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.897391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.897419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.897451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.897480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.897512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.897544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.897575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.897603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.897636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.897672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.897699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.897724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.897758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.897791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.897820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.897849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.897877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.897914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.897944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.897976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 
22:54:54.898003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.898040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.898075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.898107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.898519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.898550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.898581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.898611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.898641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.898667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.898698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.898728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.898762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.898794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.898821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.898855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.898886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.898915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.898945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.898976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.899006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.899036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.899064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.899092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.899116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.899146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.899177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.499 [2024-07-24 22:54:54.899206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.899235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 
[2024-07-24 22:54:54.899264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.899301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.899328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.899353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.899382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.899409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.899439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.899469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.899497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.899530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.899562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.899592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.899645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.899674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.899703] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.899736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.899769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.899800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.899832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.899862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.899893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.899922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.899953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.899981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.900008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.900037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.900070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.900098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.900126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.500 [2024-07-24 22:54:54.900156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.900185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.900215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.900246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.900275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.900306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.900336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.900366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.900397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.900708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.900734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.900764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.900789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.900813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.900837] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.900860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.900884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.900909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.900938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.900969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.900999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.901028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.901052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.901082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.901113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.901145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.901181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.901210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.901241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.500 [2024-07-24 22:54:54.901269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 Message suppressed 999 times: [2024-07-24 22:54:54.901395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.500 Read completed with error (sct=0, sc=15) 00:06:37.503 [2024-07-24
22:54:54.911958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.503 [2024-07-24 22:54:54.911985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.503 [2024-07-24 22:54:54.912014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.503 [2024-07-24 22:54:54.912040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.503 [2024-07-24 22:54:54.912074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.503 [2024-07-24 22:54:54.912098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.503 [2024-07-24 22:54:54.912123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.503 [2024-07-24 22:54:54.912151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.912180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.912211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.912235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.912264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.912289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.913300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.913326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.913350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.913375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.913400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.913426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.913450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.913476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.913500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.913525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.913548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.913573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.913596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.913620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.913644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.913669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 
[2024-07-24 22:54:54.913693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.913717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.913742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.913771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.913795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.913818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.913842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.913869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.913895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.913926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.913955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.913982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.914012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.914042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.914070] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.914104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.914134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.914163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.914192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.914222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.914270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.914302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.914332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.914370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.914398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.914426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.914454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.914486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.914536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.504 [2024-07-24 22:54:54.914565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.914595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.914630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.914659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.914689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.914723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.914758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.914787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.914820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.914849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.914886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.914916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.914950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.914989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.915019] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.915050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.915079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.915110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.915261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.915290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.915318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.915355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.915383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.915412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.915439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.915467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.915501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.915535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.915565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.915593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.915620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.915654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.915684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.915713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.915767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.915797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.915827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.915868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.915899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.915930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.915959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.915989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.916020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 
22:54:54.916051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.916079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.916123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.916152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.504 [2024-07-24 22:54:54.916181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.916210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.916240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.916289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.916317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.916351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.916380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.916412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.916445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.916474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.916504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.916533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.916564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.916593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.916623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.916652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.916702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.916730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.916766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.916800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.916830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.916860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.916895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.916925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.916955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 
[2024-07-24 22:54:54.916986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.917017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.917053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.917085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.917113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.917141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.917176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.917212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.917242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.917269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.917666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.917702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.917733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.917764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.917796] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.917831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.917863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.917893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.917920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.917950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.917980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918641] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.918981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.919010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.919037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.505 [2024-07-24 22:54:54.919067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [... same ctrlr_bdev.c:309 error repeated through 22:54:54.929757; duplicate lines omitted ...] 00:06:37.509 [2024-07-24 22:54:54.929785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.929816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.929843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.929868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.929895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.929922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.929963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.929991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.930020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.930056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.930094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.930123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.930153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.930180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.930215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 
22:54:54.930241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.930269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.930299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.930325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.930353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.930393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.930423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.930452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.930483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.930509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.930540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.930570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.930603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.930634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.930665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.930698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.930729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.930867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.930893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.930925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.930956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.930986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.931013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.931038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.931065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.931094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.931123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.931155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.931184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 
[2024-07-24 22:54:54.931209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.931234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.931259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.931283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.931629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.931656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.931681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.931706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.931732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.931763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.931791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.931816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.931840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.931864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.931889] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.931914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.931942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.931974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.932008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.932040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.932072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.932103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.932133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.932165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.932195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.932228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.932258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.932290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.932321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.509 [2024-07-24 22:54:54.932346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.932369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.932394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.932418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.932442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.932467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.932491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.932516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.932541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.932565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.932590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.932615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.932640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.932664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.932690] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.509 [2024-07-24 22:54:54.932715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.932741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.932770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.932794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.932819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.932844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.932867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.932891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.932915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.932939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.932963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.932987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.933011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.933035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.933059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.933084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.933110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.933141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.933175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.933204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.933234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.933263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.933294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.933323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.933516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.933549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.933581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.933611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 
22:54:54.933643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.933672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.933712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.933745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.933780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.933808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.933837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.933868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.933900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.933932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.933960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.933991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.934021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.934052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.934081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.934111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.934141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.934173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.934205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.934246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.934274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.934301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.934332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.934365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.934396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.934426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.934454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.934482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.934516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 
[2024-07-24 22:54:54.934549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.934575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.934605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.934631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.934660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.934696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.934725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.934761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.934791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.934820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.934859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.934887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.934916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.934944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.935266] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.935301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.935332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.935362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.935412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.935442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.935472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.935513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.935545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.935575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.935611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.510 [2024-07-24 22:54:54.935642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.511 [2024-07-24 22:54:54.935670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.511 [2024-07-24 22:54:54.935704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.511 [2024-07-24 22:54:54.935732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.511 [2024-07-24 22:54:54.935768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.511 [2024-07-24 22:54:54.935800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.511 [2024-07-24 22:54:54.935831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.511 [2024-07-24 22:54:54.935860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.511 [2024-07-24 22:54:54.935889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.511 [2024-07-24 22:54:54.935920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.511 [2024-07-24 22:54:54.935949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.511 [2024-07-24 22:54:54.935993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.511 [2024-07-24 22:54:54.936023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.511 [2024-07-24 22:54:54.936052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.511 [2024-07-24 22:54:54.936107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.511 [2024-07-24 22:54:54.936140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.511 [2024-07-24 22:54:54.936169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.511 [2024-07-24 22:54:54.936201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.511 [2024-07-24 22:54:54.936235] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.511 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:37.514 [2024-07-24 22:54:54.946784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.946815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.946847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.947264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.947294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.947319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.947350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.947378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.947408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.947437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.947472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.947500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.947529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.947562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.947594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 
22:54:54.947623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.947654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.947683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.947713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.947759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.947789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.947819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.947850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.947907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.947939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.947971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.948009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.948039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.948067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.948098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.948127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.948156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.948186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.948213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.948245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.948277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.948307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.948337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.948366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.948402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.948433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.948463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.948494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.948526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 
[2024-07-24 22:54:54.948558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.948592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.948623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.948648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.948677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.948709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.948738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.948769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.948804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.948834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.948867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.948896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.948929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.948957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.948997] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.949028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.949059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.949091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.949125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.949156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.949184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.949213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.949244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.949672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.949703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.949728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.949766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.514 [2024-07-24 22:54:54.949800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.949830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.515 [2024-07-24 22:54:54.949858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.949887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.949916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.949942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.949973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950222] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.950973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.951001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 
22:54:54.951034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.951064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.951096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.951127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.951157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.951196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.951229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.951260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.951291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.951334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.951364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.951393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.951422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.951458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.951740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.951772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.951797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.951823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.951847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.951873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.951898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.951922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.951947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.951972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.951997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.952022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.952048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.952072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.952096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 
[2024-07-24 22:54:54.952127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.952157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.952189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.952217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.952250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.952280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.952311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.952366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.952404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.515 [2024-07-24 22:54:54.952435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.516 [2024-07-24 22:54:54.952467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.516 [2024-07-24 22:54:54.952497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.516 [2024-07-24 22:54:54.952528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.516 [2024-07-24 22:54:54.952562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.516 [2024-07-24 22:54:54.952591] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.516 [2024-07-24 22:54:54.952622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.516 [2024-07-24 22:54:54.952654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.516 [2024-07-24 22:54:54.952684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.516 [2024-07-24 22:54:54.952714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.516 [2024-07-24 22:54:54.953020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.516 [2024-07-24 22:54:54.953052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.516 [2024-07-24 22:54:54.953084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.516 [2024-07-24 22:54:54.953118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.516 [2024-07-24 22:54:54.953146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.516 [2024-07-24 22:54:54.953177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.516 [2024-07-24 22:54:54.953209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.516 [2024-07-24 22:54:54.953238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.516 [2024-07-24 22:54:54.953268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.516 [2024-07-24 22:54:54.953297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.516 [2024-07-24 22:54:54.953327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.516 [2024-07-24 22:54:54.953360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.516 [2024-07-24 22:54:54.953389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.516 [2024-07-24 22:54:54.953417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.516 [2024-07-24 22:54:54.953446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.516 [2024-07-24 22:54:54.953475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.516 [2024-07-24 22:54:54.953502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.516 [2024-07-24 22:54:54.953539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.516 [2024-07-24 22:54:54.953570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.516 [2024-07-24 22:54:54.953598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.516 [2024-07-24 22:54:54.953625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.516 [2024-07-24 22:54:54.953669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.516 [2024-07-24 22:54:54.953701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.516 [2024-07-24 22:54:54.953734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.516 [2024-07-24 22:54:54.953763] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:06:37.519 [2024-07-24 22:54:54.964486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:06:37.519 [2024-07-24 22:54:54.964510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.964533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.964556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.964580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.964607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.964631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.964655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.964680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.964703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.964731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.964766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.964796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.964826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.964853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.964884] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.964914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.964954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.964982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.965014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.965043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.965072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.965107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.965135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.965165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.965198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.965227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.965256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.965285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.965313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.965342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.965371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.965400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.965427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.965456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.965495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.965524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.965553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.965580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.965606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.965630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.965653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.965677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.965700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 
22:54:54.965725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.965758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.965789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.965820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.965852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.965881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.965909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.965943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 [2024-07-24 22:54:54.965975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.519 22:54:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.520 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:06:37.520 [2024-07-24 22:54:55.138746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.138791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.138821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.138848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.138884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.138911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.138968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.138998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.139036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.139064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.139093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.139123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.139154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.139181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 
22:54:55.139205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.139232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.139262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.139297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.139332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.139360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.139388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.139416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.139439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.139466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.139495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.139523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.139551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.139577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.139611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.139639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.139666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.139693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.139720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.139749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.140139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.140168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.140201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.140230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.140277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.140306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.140335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.140364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.140392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 
[2024-07-24 22:54:55.140419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.140451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.140477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.140507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.140538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.140566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.140594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.140624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.140656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.140684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.140713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.140741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.140772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.140799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.140825] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.140854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.140883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.140912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.140940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.140979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.141008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.141038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.141068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.141097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.141122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.141146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.141174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.141203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.141231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.520 [2024-07-24 22:54:55.141261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.141290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.141315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.141345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.141373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.141402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.141429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.141459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.141489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.141517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.141540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.141567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.141594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.141623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.141677] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.141708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.141738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.141773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.520 [2024-07-24 22:54:55.141801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.141830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.141856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.141914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.141944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.141976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.142005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.142057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.142187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.142216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.142245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.142272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.142307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.142337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.142370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.142400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.142426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.142459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.142488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.142515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.142544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.142570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.142597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.142622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.142648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 
22:54:55.142676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.142705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.142733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.142763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.142790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.142814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.142844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.142871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.142899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.142929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.142959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.142989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.143638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.143671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.143701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.143729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.143763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.143790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.143839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.143869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.143897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.143926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.143954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.143982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.144010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.144038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.144068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.144096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.144150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 
[2024-07-24 22:54:55.144176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.144217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.144244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.144281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.144309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.144345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.144375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.144408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.144436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.144478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.144510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.144552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.144582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.144610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.144640] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.521 [2024-07-24 22:54:55.144670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical ctrlr_bdev.c:309 *ERROR* line repeated with timestamps 22:54:55.144697 through 22:54:55.154583; repeats elided]
[2024-07-24 22:54:55.154611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.524 [2024-07-24 22:54:55.154643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.524 [2024-07-24 22:54:55.154888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.524 [2024-07-24 22:54:55.154917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.524 [2024-07-24 22:54:55.154952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.524 [2024-07-24 22:54:55.154981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.524 [2024-07-24 22:54:55.155012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.524 [2024-07-24 22:54:55.155038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.524 [2024-07-24 22:54:55.155067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.524 [2024-07-24 22:54:55.155096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.524 [2024-07-24 22:54:55.155124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.524 [2024-07-24 22:54:55.155153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.524 [2024-07-24 22:54:55.155182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.524 [2024-07-24 22:54:55.155208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.524 [2024-07-24 22:54:55.155235] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.524 [2024-07-24 22:54:55.155266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.524 [2024-07-24 22:54:55.155293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.524 [2024-07-24 22:54:55.155325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.524 [2024-07-24 22:54:55.155355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.524 [2024-07-24 22:54:55.155383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.155411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.155441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.155468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.155498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.155527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.155550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.155577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.155609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.155638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.525 [2024-07-24 22:54:55.155672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.155707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.155739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.155774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.155802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.155828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.155855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.155884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.155912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.155935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.155967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.155994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.156023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.156050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.156077] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.156104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.156132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.156161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.156189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.156222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.156248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.156278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.156306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.156339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.156368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.156412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.156441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.156470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.156496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.156532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.156559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.156591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.156959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.156985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.157012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.157039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.157065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.157093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.157127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.157168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.157193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.157221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.157251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 
22:54:55.157280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.157308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.157336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.157364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.157396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.157425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.157454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.157480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.157509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.157536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.157565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.157593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.157621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.157685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.157715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.157762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.157792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.157824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.157853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.157879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.157913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.157943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.157971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.158000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.158029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.158055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.158083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.158112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.158141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 
[2024-07-24 22:54:55.158167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.158197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.158225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.158260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.158303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.158333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.158358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.158387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.158419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.158448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.158477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.158506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.158535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.158564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.525 [2024-07-24 22:54:55.158600] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.158627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.158656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.158686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.158716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.158749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.158782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.158813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.158844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.158874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.158998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.159032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.159064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.159096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.159737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.526 [2024-07-24 22:54:55.159772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.159802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.159853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.159885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.159916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.159947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.159975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.160012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.160040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.160070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.160129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.160162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.160192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.160229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.160259] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.160288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.160319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.160347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.160381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.160409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.160438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.160467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.160496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.160552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.160583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.160612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.160642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.160669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.160697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.160734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.160776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.160806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.160832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.160856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.160885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.160918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.160949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.160984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.161013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.161040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.161067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.161094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.161124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 
22:54:55.161154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.161182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.161209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.161238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.161270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.161297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.161328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.161356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.161385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.161416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.161446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.161471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.161501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.161531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.161560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.526 [2024-07-24 22:54:55.161722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd *ERROR* lines repeated, omitted ...]
00:06:37.529 22:54:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:06:37.529 22:54:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
[... identical ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd *ERROR* lines repeated, omitted ...] 00:06:37.529
[2024-07-24 22:54:55.172341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.529 [2024-07-24 22:54:55.172373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.529 [2024-07-24 22:54:55.172402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.172431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.172469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.172498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.172529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.172559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.172590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.172646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.172674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.172704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.172735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.172768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.172799] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.172829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.172859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.172911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.173265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.173302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.173330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.173360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.173388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.173417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.173445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.173476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.173504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.173533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:37.530 [2024-07-24 
22:54:55.173565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.173595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.173623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.173650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.173680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.173707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.173755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.173785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.173815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.173844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.173874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.173904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.173935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.173964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.173996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.174025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.174059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.174087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.174119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.174167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.174198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.174226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.174254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.174284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.174346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.174374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.174404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.174433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.174461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 
[2024-07-24 22:54:55.174492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.174528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.174557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.174602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.174632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.174663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.174696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.174724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.174771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.174802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.174832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.174866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.174894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.174941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.174970] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.174999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.175030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.175059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.175087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.175117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.175153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.175195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.175227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.175255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.175690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.175728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.175761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.175785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.175817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.530 [2024-07-24 22:54:55.175850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.175881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.175912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.175942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.175974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.176003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.176031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.176061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.176094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.176125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.530 [2024-07-24 22:54:55.176158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.176187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.176218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.176247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.176276] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.176304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.176336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.176364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.176393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.176421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.176452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.176480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.176511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.176534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.176559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.176583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.176607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.176631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.176655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.176680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.176705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.176729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.176756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.176784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.176815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.176846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.176877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.176907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.176939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.176966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.176997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.177025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.177056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 
22:54:55.177100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.177130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.177159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.177187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.177222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.177251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.177281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.177309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.177337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.177364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.177394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.177422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.177451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.177483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.177514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.177544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.177911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.177942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.177972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.178000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.178029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.178062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.178092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.178128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.178166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.178199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.178229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.178259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.178289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 
[2024-07-24 22:54:55.178324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.178352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.178383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.178408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.178432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.178456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.178481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.178505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.178529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.178558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.178586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.178642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.178672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.178699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.178730] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.178765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.178794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.178823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.178852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.178888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.178918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.178947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.178979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.179009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.179051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.179080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.179109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.179140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.531 [2024-07-24 22:54:55.179168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.531 [2024-07-24 22:54:55.179197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:06:37.535 [2024-07-24 22:54:55.190201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:06:37.535 [2024-07-24 22:54:55.190230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.190260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.190289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.190321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.190351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.190398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.190429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.190457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.190488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.190517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.190551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.190581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.190610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.190641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.190670] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.190706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.190736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.190770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.190816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.190871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.190899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.190930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.190960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.190988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.191039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.191071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.191101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.191149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.191178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.191219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.191249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.191278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.191311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.191352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.191379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.191410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.191785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.191816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.191845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.191876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.191906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.191934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.191967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 
22:54:55.191995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.192025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.192056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.192085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.192132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.192160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.192190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.192229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.192259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.192287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.192314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.192344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.535 [2024-07-24 22:54:55.192377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.192406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.192436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.192470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.192498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.192530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.192558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.192590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.192622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.192653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.192682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.192714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.192746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.192782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.192810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.192839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.192870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 
[2024-07-24 22:54:55.192906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.192934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.192962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.192992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.193021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.193050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.193083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.193121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.193151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.193180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.193205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.193233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.193262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.193293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.193327] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.193354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.193383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.193416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.193449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.193478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.193506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.193535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.193567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.193594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.193628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.193656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.193684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.194135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.194171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.536 [2024-07-24 22:54:55.194198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.194227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.194255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.194282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.194312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.194371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.194401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.194428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.194458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.194488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.194522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.194553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.194580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.194611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.194643] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.194675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.194704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.194731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.194763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.194791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.194816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.194846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.194878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.194912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.194944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.194974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.195005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.195029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.195054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.195078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.195102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.195126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.195157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.195187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.195218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.195249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.195279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.195311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.195336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.195360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.195385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.195409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.195433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 
22:54:55.195457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.195485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.195515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.195545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.195574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.195607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.195636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.536 [2024-07-24 22:54:55.195670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.195699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.195727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.195772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.195804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.195831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.195860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.195887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.195918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.195947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.195974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.196007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.196352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.196381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.196420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.196452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.196482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.196510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.196545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.196572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.196602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.196633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 
[2024-07-24 22:54:55.196665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.196695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.196723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.196762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.196792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.196819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.196852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.196922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.196954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.197006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.197038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.197067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.197096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.197130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.197161] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.537 [2024-07-24 22:54:55.197191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.540 [2024-07-24 22:54:55.208317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.540 [2024-07-24 22:54:55.208348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.540 [2024-07-24 22:54:55.208377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.540 [2024-07-24 22:54:55.208410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.540 [2024-07-24 22:54:55.208438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.540 [2024-07-24 22:54:55.208468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.540 [2024-07-24 22:54:55.208503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.540 [2024-07-24 22:54:55.208533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.540 [2024-07-24 22:54:55.208565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.540 [2024-07-24 22:54:55.208592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.540 [2024-07-24 22:54:55.208620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.540 [2024-07-24 22:54:55.208649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.540 [2024-07-24 22:54:55.208688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.540 [2024-07-24 22:54:55.208712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.540 [2024-07-24 22:54:55.208741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.540 [2024-07-24 22:54:55.208776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.540 [2024-07-24 22:54:55.208803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.540 [2024-07-24 22:54:55.208834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.540 [2024-07-24 22:54:55.208862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.540 [2024-07-24 22:54:55.208892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.540 [2024-07-24 22:54:55.208919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.540 [2024-07-24 22:54:55.208947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.208976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.209005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.209034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.209065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.209099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.209128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.209157] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.209187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.209217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.209250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.209279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.209307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.209339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.209368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.209396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.209429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.209459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.209494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.209525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.209553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.209580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.209611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.209635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.209665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.209696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.209738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.209770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.209799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.209828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.209861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.209888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.210025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.210056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.210083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.210110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 
22:54:55.210149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.210180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.210212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.210245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.210273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.210304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.210338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.210369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.210398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.210429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:37.541 [2024-07-24 22:54:55.210457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.210487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.210516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.211122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:06:37.541 [2024-07-24 22:54:55.211153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.211181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.211211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.211240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.211271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.211299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.211327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.211357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.211387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.211414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.211448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.211482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.211511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.211538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.211568] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.211598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.211629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.211660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.211692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.211724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.211759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.211787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.211815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.211848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.211877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.211907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.211935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.211966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.212011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.541 [2024-07-24 22:54:55.212040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.212068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.212098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.212128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.212158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.212188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.212218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.212259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.212287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.212319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.212353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.212380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.212410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.212438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.212464] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.212494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.212668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.541 [2024-07-24 22:54:55.212698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.212727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.212761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.212798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.212830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.212860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.212911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.212942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.212971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.213016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.213046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.213075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.213107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.213135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.213172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.213202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.213230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.213259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.213289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.213318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.213349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.213378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.213413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.213442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.213474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.213504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 
22:54:55.213533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.213564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.213593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.213629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.213658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.213687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.213713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.213743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.213776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.213807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.213835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.213867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.213900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.213931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.213958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.213989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.214021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.214051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.214081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.214113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.214144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.214173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.214203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.214232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.214265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.214293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.214325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.214354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.214384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 
[2024-07-24 22:54:55.214420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.214449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.214481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.214511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.214540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.214576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.214606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.214634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.214935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.214966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.214997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.215027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.542 [2024-07-24 22:54:55.215056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.543 [2024-07-24 22:54:55.215086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.543 [2024-07-24 22:54:55.215121] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.226052] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.226088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.226116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.226144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.226180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.226208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.226238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.226268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.226532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.226563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.226593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.226623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.226658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.226690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.226717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.546 [2024-07-24 22:54:55.226745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.226780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.226810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.226840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.226871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.226900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.226930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.226959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.226988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.227031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.227060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.227091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.227121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.227153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.227183] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.227211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.227239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.227273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.227306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.227336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.227366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.227397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.227426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.227454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.227483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.227511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.227538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.227569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.227601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.227628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.227658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.227693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.227721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.227755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.227784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.227813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.227844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.227873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.227905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.227932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.227965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.227989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.228014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 
22:54:55.228038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.228064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.228095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.228129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.228156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.228185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.228216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.546 [2024-07-24 22:54:55.228248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.228279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.228309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.228345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.228375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.228406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.228456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.228589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.228619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.228648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.228677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.228718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.228750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.228783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.228809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.228839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.228871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.228900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.228930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.228958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.228988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.229020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 
[2024-07-24 22:54:55.229050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.229082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.229504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.229535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.229564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.229589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.229622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.229650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.229679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.229707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.229735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.229771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.229802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.229832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.229863] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.229896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.229921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.229945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.229969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.229993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.230017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.230043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.230072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.230101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.230132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.230163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.230194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.230226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.230267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.547 [2024-07-24 22:54:55.230298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.230326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.230357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.230386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.230429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.230458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.230485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.230516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.230547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.230576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.230623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.230653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.230682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.230711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.230740] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.230777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.230805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.230834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.230866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.231045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.231074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.231102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.231130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.231158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.231194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.231222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.231249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.231279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.231308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.231340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.231373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.231403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.231430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.231457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.231486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.231521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.231558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.231583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.231612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.231641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.231670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.231696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.231726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 
22:54:55.231761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.547 [2024-07-24 22:54:55.231793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.231825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.231854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.231883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.231910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.231941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.231970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.231999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.232032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.232062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.232094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.232122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.232152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.232188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.232217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.232247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.232286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.232315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.232345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.232381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.232410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.232439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.232470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.232500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.232531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.232564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.232593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.232621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 
[2024-07-24 22:54:55.232651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.232690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.232719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.232756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.232786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.232814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.232843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.232885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.232914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.232943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.232970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.233347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.233387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.233416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.233445] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.548 [2024-07-24 22:54:55.233473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-07-24 22:54:55.243404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.551 [2024-07-24 22:54:55.243437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.551 [2024-07-24 22:54:55.243467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.551 [2024-07-24 22:54:55.243498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.551 [2024-07-24 22:54:55.243545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.551 [2024-07-24 22:54:55.243574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.551 [2024-07-24 22:54:55.243603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.551 [2024-07-24 22:54:55.243635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.551 [2024-07-24 22:54:55.243663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.551 [2024-07-24 22:54:55.243702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.551 [2024-07-24 22:54:55.243729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.551 [2024-07-24 22:54:55.243764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.551 [2024-07-24 22:54:55.243797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.551 [2024-07-24 22:54:55.243827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.551 [2024-07-24 22:54:55.243856] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.551 [2024-07-24 22:54:55.243892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.551 [2024-07-24 22:54:55.243922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.551 [2024-07-24 22:54:55.243951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.551 [2024-07-24 22:54:55.243980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.551 [2024-07-24 22:54:55.244012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.551 [2024-07-24 22:54:55.244047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.551 [2024-07-24 22:54:55.244072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.551 [2024-07-24 22:54:55.244104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.551 [2024-07-24 22:54:55.244133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.551 [2024-07-24 22:54:55.244164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.551 [2024-07-24 22:54:55.244282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.551 [2024-07-24 22:54:55.244312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.551 [2024-07-24 22:54:55.244353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.551 [2024-07-24 22:54:55.244385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.551 [2024-07-24 22:54:55.244414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.551 [2024-07-24 22:54:55.244438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.551 [2024-07-24 22:54:55.244467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.551 [2024-07-24 22:54:55.244496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.244534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.244563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.244607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.244638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.244667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.244696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.244727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.244768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.244797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.244824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.244856] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.244885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.244917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:37.552 [2024-07-24 22:54:55.244948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.244978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.245031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.245061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.245091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.245122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.245149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.245184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.245214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.245243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.245274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 
22:54:55.245303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.245334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.245361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.245390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.245424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.245455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.245482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.245511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.245542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.245573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.245601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.245632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.245666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.245697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.245729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.245769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.245797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.245824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.245856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.245896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.245926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.245956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.245984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.246018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.246049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.246082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.246114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.246143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.246171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 
[2024-07-24 22:54:55.246200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.246241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.246269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.246391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.246418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.246447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.246476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.246507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.246537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.246564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.246594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.246624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.246648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.246679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.246710] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.246741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.246773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.246803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.246828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.246861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.247246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.247272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.247309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.247340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.247372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.247403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.247434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.552 [2024-07-24 22:54:55.247460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.247485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.553 [2024-07-24 22:54:55.247509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.247533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.247557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.247581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.247606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.247630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.247654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.247678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.247702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.247727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.247756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.247787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.247818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.247851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.247884] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.247914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.247946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.247977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 
22:54:55.248621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.248872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.249302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.249337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.249366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.249398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.249427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.249456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.249485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.249512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.249544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.249574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.249604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.249640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.249671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.249701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.249729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.249764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.553 [2024-07-24 22:54:55.249793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.554 [2024-07-24 22:54:55.249829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.554 
[2024-07-24 22:54:55.249859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.554 [2024-07-24 22:54:55.249898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.554 [2024-07-24 22:54:55.249929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.554 [2024-07-24 22:54:55.249962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.554 [2024-07-24 22:54:55.249993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.554 [2024-07-24 22:54:55.250024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.554 [2024-07-24 22:54:55.250053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.554 [2024-07-24 22:54:55.250084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.554 [2024-07-24 22:54:55.250115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.554 [2024-07-24 22:54:55.250145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.554 [2024-07-24 22:54:55.250180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.554 [2024-07-24 22:54:55.250211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.554 [2024-07-24 22:54:55.250242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.554 [2024-07-24 22:54:55.250270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.554 [2024-07-24 22:54:55.250304] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.554 [2024-07-24 22:54:55.250338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848
[2024-07-24 22:54:55.260755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.260787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.260821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.260852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.260881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.260911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.260949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.260979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.261010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.261041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.261085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.261117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.261141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.261176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.261206] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.261235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.261268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.261317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.261348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.261377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.261409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.261438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.261469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.261505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.261536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.261570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.261603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.261630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.261662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.848 [2024-07-24 22:54:55.261693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.261725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.261761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.261792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.261823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.261854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.261883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.261915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.848 [2024-07-24 22:54:55.261946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.261977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.262006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.262038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.262069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.262098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.262146] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.262176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.262204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.262258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.262290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.262322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.262362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.262393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.262422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.262455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.262486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.262519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.262549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.262585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.262614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.262644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.262681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.263025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.263055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.263089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.263118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.263157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.263188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.263217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.263243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.263274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.263303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.263334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.263364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 
22:54:55.263396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.263424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.263454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.263486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.263515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.263546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.263580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.263610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.263640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.263672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.263700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.263731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.263767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.263803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.263834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.263862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.263895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.263925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.263952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.263980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.264010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.264051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.264082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.264111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.264143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.264177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.264207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.264236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.264265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 
[2024-07-24 22:54:55.264296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.264327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.264357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.264387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.264419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.264451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.264483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.264514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.264540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.264574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.264608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.264639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.264664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.264689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.264713] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.264739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.264769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.264793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.264820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.264853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.264884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.264916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.264947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.265344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.265377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.265409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.265440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.265470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.265499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.849 [2024-07-24 22:54:55.265536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.265568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.265596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.265650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.849 [2024-07-24 22:54:55.265681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.265714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.265746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.265780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.265807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.265836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.265871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.265903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.265932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.265971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.266000] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.266029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.266063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.266093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.266119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.266153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.266188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.266218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.266249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.266278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.266308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.266340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.266375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.266407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.266437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.266467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.266501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.266531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.266561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.266593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.266623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.266655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.266686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.266715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.266755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.266787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.266815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.266847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.266880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 
22:54:55.266911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.266941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.266973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.267003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.267033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.267061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.267093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.267124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.267159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.267189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.267226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.267255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.267285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.267314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.267678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [2024-07-24 22:54:55.267713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.850 [last message repeated for each read command through 2024-07-24 22:54:55.278769] 00:06:37.853 [2024-07-24 22:54:55.278811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.853 [2024-07-24 22:54:55.278840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.853 [2024-07-24 22:54:55.279025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.853 [2024-07-24 22:54:55.279071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.853 [2024-07-24 22:54:55.279103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.853 [2024-07-24 22:54:55.279134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.853 [2024-07-24 22:54:55.279170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.853 [2024-07-24 22:54:55.279198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.853 [2024-07-24 22:54:55.279228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.853 [2024-07-24 22:54:55.279265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.853 [2024-07-24 22:54:55.279295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.853 [2024-07-24 22:54:55.279325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.853 [2024-07-24 22:54:55.279359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.853 [2024-07-24 22:54:55.279388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.279418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 
[2024-07-24 22:54:55.279480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.279510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.279540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.279573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.279602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.279631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.279670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.279701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.279733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.279772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.279803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.279836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.279866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.279894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.279926] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.279956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.279987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.280017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.280046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.280081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.280111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.280141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.280166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.280199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.280230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.280257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.280287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.280327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.280357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.854 [2024-07-24 22:54:55.280383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.280412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.280439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.280474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.280506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.280540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.280569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.280599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.280627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.280657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.280687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.280720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.280754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.280783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.280812] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.280842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.280871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.280901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.280929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.280960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.280991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.281020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.281188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.281223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.281253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.281292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.281322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.281356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.281387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.281418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.281448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.281474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.281504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.281538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.281985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.282022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.282053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.282080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.282112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.282143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.282173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:37.854 [2024-07-24 22:54:55.282206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.282236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.282266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.282292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.282317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.282343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.282367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.282393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.282418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.282451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.282482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.282508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.282534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.282561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.282585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.282610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 
[2024-07-24 22:54:55.282636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.282661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.282692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.282722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.282756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.282788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.282817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.282848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.282878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.854 [2024-07-24 22:54:55.282903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.282928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.282955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.282984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.283011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.283039] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.283077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.283107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.283137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.283193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.283223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.283255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.283286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.283316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.283347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.283383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.283415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.283446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.283488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.283664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.855 [2024-07-24 22:54:55.283693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.283724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.283762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.283800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.283829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.283860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.283892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.283921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.283951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.283980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.284014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.284044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.284073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.284111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.284138] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.284167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.284416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.284446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.284473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.284503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.284534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.284565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.284598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.284628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.284661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.284693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.284726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.284760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.284790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.284819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.284851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.284884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.284918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.284946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.284975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.285005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.285034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.285065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.285102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.285132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.285162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.285202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.285231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 
22:54:55.285260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.285289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.285321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.285352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.285381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.285410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.285444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.285476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.285509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.285541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.285573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.285606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.285639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.285669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.285698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.285730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.855 [2024-07-24 22:54:55.296821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.296850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.296880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.296921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.296950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.296980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.297006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.297048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.297078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.297107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.297133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.297166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.297195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.297223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.297251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 
[2024-07-24 22:54:55.297281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.297313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.297342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.297375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.297405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.297725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.297761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.297790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.297834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.297867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.297898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.297929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.297958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.297986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.298016] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.298049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.298076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.298108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.298140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.298172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.298199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.298231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.298262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.298295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.298321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.298353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.298388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.298420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.298449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.859 [2024-07-24 22:54:55.298478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.298512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.298541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.298573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.298602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.298633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.298662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.298693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.298720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.298745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.298774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.298802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.298832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.298860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.298887] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.298912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.298937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.298962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.298987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.299012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.299039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.299074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.299105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.299133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.299163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.299195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.299228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.299258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.299289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.299319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.299348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.299378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.299411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.299444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.299474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.299504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.299533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.299563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.299592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.299622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.299743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.299780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.299809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 
22:54:55.299838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.299867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.299898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.299927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.859 [2024-07-24 22:54:55.299955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.299986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.300018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.300048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.300078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.300518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.300549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.300576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.300618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.300649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.300678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.300712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.300741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.300775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.300803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.300835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.300867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.300895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.300927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.300957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.300989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.301021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.301053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.301083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.301112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 
[2024-07-24 22:54:55.301141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.301173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.301201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.301231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.301265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.301296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.301326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.301357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.301388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.301415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.301444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.301474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.301505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.301534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.301566] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.301617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.301647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.301678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.301708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.301736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.301777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.301806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.301835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.301865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.301894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.301929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.301958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.301986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.302015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.860 [2024-07-24 22:54:55.302049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.302078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.302239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.302269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.302296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.302339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.302368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.302398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.302428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.302455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.302487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.302517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.302548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.302578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.302607] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.302635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.302666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.302695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.302726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.302758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.302791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.302823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.302853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.302881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.302912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.302941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.302971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.303001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.303031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.303060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.303088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.303119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.303147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.303174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.303207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.303236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.303261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.303291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.303319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.303356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.303381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.303413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 22:54:55.303442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 [2024-07-24 
22:54:55.303473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.860 
[identical *ERROR* line repeated continuously, timestamps 22:54:55.303502 through 22:54:55.314614; duplicates elided] 00:06:37.864 
true 00:06:37.865 
[2024-07-24 22:54:55.314648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.865 [2024-07-24 22:54:55.314676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.314705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.314733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.314766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.314794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.314826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.314856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.314883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.314920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.314952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.314989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.315022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.315050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.315105] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.315132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.315161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.315192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.315223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.315258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.315287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.315314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.315342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.315370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.315398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.315436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.315466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.315495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.315522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.866 [2024-07-24 22:54:55.315554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.315585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.315613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.315654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.315682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.315711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.315741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.315773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.315802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.315829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.315859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.315890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.315921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.315948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.315978] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.316008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.316037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.316062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.316094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.316125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.316157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.316304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.316338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.316367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.316400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.316428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.316462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.316490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.316521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.316572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.316602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.316631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.316661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.316692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.866 [2024-07-24 22:54:55.316727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.316760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.316791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.316820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.316849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.316878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.316930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.316961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.317382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 
22:54:55.317411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.317453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.317483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.317509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.317542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.317575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.317606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.317638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.317668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.317700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.317730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.317763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.317794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.317824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.317855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.317903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.317931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.317960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.317988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.318015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.318053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.318085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.318113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.318162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.318192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.318221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.318251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.318281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.318311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 
[2024-07-24 22:54:55.318343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.318374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.318403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:37.867 [2024-07-24 22:54:55.318434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.318467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.318506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.318536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.318563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.318591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.318621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.318655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.318684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.318870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.318902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:06:37.867 [2024-07-24 22:54:55.318932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.318960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.318989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.319018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.319050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.319080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.319110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.319140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.319171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.319200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.319232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.319262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.867 [2024-07-24 22:54:55.319294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.319329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.319361] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.319393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.319426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.319453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.319487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.319517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.319546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.319580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.319607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.319638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.319670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.319699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.319738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.319773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.319802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.868 [2024-07-24 22:54:55.319832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.319861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.319892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.319921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.319949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.319983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.320009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.320051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.320079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.320149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.320178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.320206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.320235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.320265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.320330] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.320356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.320384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.320413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.320439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.320476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.320507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.320532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.320561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.320589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.320617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.320647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.320675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.320701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.320730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.320765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.320798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.320828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.320858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.321152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.868 [2024-07-24 22:54:55.321183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.869 [2024-07-24 22:54:55.321211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.869 [2024-07-24 22:54:55.321249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.869 [2024-07-24 22:54:55.321276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.869 [2024-07-24 22:54:55.321311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.869 [2024-07-24 22:54:55.321340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.869 [2024-07-24 22:54:55.321368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.869 [2024-07-24 22:54:55.321397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.869 [2024-07-24 22:54:55.321426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.869 [2024-07-24 
22:54:55.321451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.869
[... same ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd *ERROR* line repeated verbatim from 22:54:55.321480 through 22:54:55.332299 ...] 00:06:37.872 [2024-07-24
22:54:55.332328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.872 [2024-07-24 22:54:55.332359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.872 [2024-07-24 22:54:55.332390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.872 [2024-07-24 22:54:55.332420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.872 [2024-07-24 22:54:55.332451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.872 [2024-07-24 22:54:55.332482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.872 [2024-07-24 22:54:55.332511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.872 [2024-07-24 22:54:55.332543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.872 [2024-07-24 22:54:55.332591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.872 [2024-07-24 22:54:55.332623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.872 [2024-07-24 22:54:55.332658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.872 [2024-07-24 22:54:55.332693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.872 [2024-07-24 22:54:55.332723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.872 [2024-07-24 22:54:55.332754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.872 [2024-07-24 22:54:55.332784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.872 [2024-07-24 22:54:55.332813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.872 [2024-07-24 22:54:55.332858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.872 [2024-07-24 22:54:55.332886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.872 [2024-07-24 22:54:55.332915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.872 [2024-07-24 22:54:55.332947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.872 [2024-07-24 22:54:55.332978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.872 [2024-07-24 22:54:55.333006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.872 [2024-07-24 22:54:55.333038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.872 [2024-07-24 22:54:55.333074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.872 [2024-07-24 22:54:55.333100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.872 [2024-07-24 22:54:55.333127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.333157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.333183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.333214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 
[2024-07-24 22:54:55.333246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.333278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.333310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.333340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.333370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.333401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.333444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.333474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.333504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.333540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.333572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.333600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.333628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.333657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.333682] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.333714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.333852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.333884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.333915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.333945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.333974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.334002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.334034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.334066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.334094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.334122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.334151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.334180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.334208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.873 [2024-07-24 22:54:55.334238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.334269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.334308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.334336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.334366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.334410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.334439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.334470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.334501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.334530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.334744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.334780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.334812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.334844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.334873] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.334904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.334930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.334971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.335003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.335030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.335063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.335094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.335124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.335164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.335197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.335226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.335254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.335282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.335307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.335338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.335368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.335396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.335427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.335456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.335498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.335526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.335559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.335586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.335616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.335650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.335676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.335708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.335737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 
22:54:55.335769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.335794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.335825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.873 [2024-07-24 22:54:55.335855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.335887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.335916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.335947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.336348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.336377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.336405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.336434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.336464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.336493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.336521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.336557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.336595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.336627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.336655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.336681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.336711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.336738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.336771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.336803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.336832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.336867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.336900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.336931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.336960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.336991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 
[2024-07-24 22:54:55.337020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.337051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.337081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.337122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.337150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.337178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.337229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.337257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.337291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.337323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.337351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.337398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.337428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.337460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.874 [2024-07-24 22:54:55.337496] ctrlr_bdev.c: 
22:54:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:37.874
22:54:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.874
> SGL length 1 00:06:37.875 [2024-07-24 22:54:55.341278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.341325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.341356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.341385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.341449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.341481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.341510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.341542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.341572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.341601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.341631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.341661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.341699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.341729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.341763] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.341792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.341821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.341854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.341886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.341921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.341952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.341982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.342017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.342060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.342088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.342119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.342149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.342174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.342209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.342239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.342269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.342302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.342337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.342370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.342399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.342426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.342461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.342490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.342523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.342550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.342581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.342612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.342643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 
22:54:55.342673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.342703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.342732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.342765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.342799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.342829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.342860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.342893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.342923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.875 [2024-07-24 22:54:55.342955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.342986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.343016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.343043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.343234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.343265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.343293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.343320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.343350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.343385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.343416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.343447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.343476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.343506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.343537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.343567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.343596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.343629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.343660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.343689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 
[2024-07-24 22:54:55.343720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.343750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.343806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.343836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.343864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.343892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.343923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.344344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.344373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.344415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.344444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.344473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.344503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.344536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.344563] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.344593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.344623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.344654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.344681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.344708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.344732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.344761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.344785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.344809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.344836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.344866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.344906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.344937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.344967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.876 [2024-07-24 22:54:55.345003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.345032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.345065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.345095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.345123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.345156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.345185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.345214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.345248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.345277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.345306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.345338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.345367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.345399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.345428] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.345459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.345487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.345515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.345549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.345578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.345607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.345636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.345665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.345693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.345726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.345761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.345788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.345818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.345852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.345883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.345908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.345940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.345969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.345998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.346030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.346273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.346302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.346333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.346363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.346393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.346436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.346464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.346493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 
22:54:55.346539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.346569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.346597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.346627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.876 [2024-07-24 22:54:55.346658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.346688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.346719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.346747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.346780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.346811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.346842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.346880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.346911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.346939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.346970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.346999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.347050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.347080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.347114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.347145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.347173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.347202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.347232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.347261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.347309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.347342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.347372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.347400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.347429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 
[2024-07-24 22:54:55.347458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.347501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.347532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.347562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.347592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.347620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.347651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.347683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.347714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.347744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.347782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.347813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.347843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.347872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.347913] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.877 [2024-07-24 22:54:55.347940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:06:37.879 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
> SGL length 1 00:06:37.880 [2024-07-24 22:54:55.359246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.880 [2024-07-24 22:54:55.359284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.880 [2024-07-24 22:54:55.359315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.880 [2024-07-24 22:54:55.359342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.880 [2024-07-24 22:54:55.359373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.880 [2024-07-24 22:54:55.359403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.880 [2024-07-24 22:54:55.359433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.880 [2024-07-24 22:54:55.359462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.880 [2024-07-24 22:54:55.359491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.880 [2024-07-24 22:54:55.359522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.880 [2024-07-24 22:54:55.359550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.880 [2024-07-24 22:54:55.359581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.880 [2024-07-24 22:54:55.359610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.880 [2024-07-24 22:54:55.359645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.880 [2024-07-24 22:54:55.359677] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.880 [2024-07-24 22:54:55.359706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.880 [2024-07-24 22:54:55.359750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.880 [2024-07-24 22:54:55.359785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.880 [2024-07-24 22:54:55.359817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.880 [2024-07-24 22:54:55.359849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.880 [2024-07-24 22:54:55.359879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.880 [2024-07-24 22:54:55.359907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.880 [2024-07-24 22:54:55.359953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.880 [2024-07-24 22:54:55.359984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.880 [2024-07-24 22:54:55.360017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.880 [2024-07-24 22:54:55.360048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.880 [2024-07-24 22:54:55.360076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.360420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.360451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.360482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.360513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.360545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.360581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.360608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.360636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.360666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.360694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.360732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.360765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.360791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.360826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.360855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.360884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 
22:54:55.360910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.360940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.360969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 
[2024-07-24 22:54:55.361704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.361980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.362008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.362032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.362059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.362084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.362108] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.362133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.362162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.362192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.362222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.362362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.362395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.362423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.362460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.362489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.362518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.362747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.362781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.362814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.362844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.881 [2024-07-24 22:54:55.362874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.362904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.362934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.362985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.363018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.363048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.363078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.363110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.363141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.363174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.363204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.363239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.363271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.363300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.363351] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.363382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.363410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.363443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.363478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.363508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.881 [2024-07-24 22:54:55.363536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.363569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.363605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.363636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.363663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.363696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.363730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.363762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.363790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.363820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.363852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.363879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.363909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.363939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.363967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.363997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.364035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.364065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.364095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.364125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.364153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.364188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.364217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 
22:54:55.364247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.364280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.364311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.364342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.364370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.364400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.364433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.364463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.364492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.364523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.364889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.364922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.364952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.364981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.365011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.365041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.365069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.365098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.365126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.365159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.365189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.365216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.365249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.365280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.365307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.365335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.365362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.365389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.365415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 
[2024-07-24 22:54:55.365447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.365477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.365506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.365541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.365579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.365608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.365639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.365667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.365699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.365730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.365767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.365791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.365825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.365854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.365894] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.882 [2024-07-24 22:54:55.365924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.885 [2024-07-24 22:54:55.376930] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.885 [2024-07-24 22:54:55.376958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.885 [2024-07-24 22:54:55.376986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.377016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.377044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.377083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.377115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.377144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.377174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.377202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.377236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.377265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.377294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.377323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.377352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.886 [2024-07-24 22:54:55.377385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.377414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.377444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.377472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.377505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.377538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.377566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.377599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.377629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.377657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.377688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.377717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.377745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.377779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.377810] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.377843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.377877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.377908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.377938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.377969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.378000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.378033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.378077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.378107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.378138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.378167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.378205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.378230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.378260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.378289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.378737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.378771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.378801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.378828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.378855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.378892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.378916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.378946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.378976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.379007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.379041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.379070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.379100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 
22:54:55.379131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.379174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.379207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.379236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.379265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.379298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.379355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.379384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.379412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.379445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.379476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.379533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.379562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.379596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.379629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.379658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.379695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.379723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.379755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.379786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.379816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.379845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.379879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.379909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.379938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.379969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.379996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.380038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.380065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 
[2024-07-24 22:54:55.380092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.380123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.380153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.380182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.380210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.380238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.380266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.380298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.380329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.380361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.380391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.380416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.886 [2024-07-24 22:54:55.380447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.380479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.380509] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.380538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.380567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.380599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.380631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.380662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.380692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.380720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.380849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.380878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.380907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.380935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.380967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.380997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.381236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.887 [2024-07-24 22:54:55.381265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.381294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.381324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.381353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.381384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.381414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.381442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.381470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.381501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.381531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.381559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.381593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.381621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.381649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.381685] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.381714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.381744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.381778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.381820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.381850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.381879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.381903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.381934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.381962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.381991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.382026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.382059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.382085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.382115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.382142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.382171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.382203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.382234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.382259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.382289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.382321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.382348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.382379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.382408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.382438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.382469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.382499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.382529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 
22:54:55.382558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.382587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.382616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.382656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.382686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.382714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.382743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.382777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.382809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.382841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.382868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.382911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.382941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.383385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.383416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.383444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.383472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.383511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.383541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.383569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.383601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.887 [2024-07-24 22:54:55.383630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.888 [2024-07-24 22:54:55.383661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.888 [2024-07-24 22:54:55.383691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.888 [2024-07-24 22:54:55.383720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.888 [2024-07-24 22:54:55.383749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.888 [2024-07-24 22:54:55.383781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.888 [2024-07-24 22:54:55.383814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.888 [2024-07-24 22:54:55.383847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.888 
[2024-07-24 22:54:55.383882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.888 [2024-07-24 22:54:55.383911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.888 [2024-07-24 22:54:55.383938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.888 [2024-07-24 22:54:55.383970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.888 [2024-07-24 22:54:55.384000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.888 [2024-07-24 22:54:55.384030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.888 [2024-07-24 22:54:55.384060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.888 [2024-07-24 22:54:55.384089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.888 [2024-07-24 22:54:55.384115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.888 [2024-07-24 22:54:55.384141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.888 [2024-07-24 22:54:55.384166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.888 [2024-07-24 22:54:55.384190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.888 [2024-07-24 22:54:55.384215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.888 [2024-07-24 22:54:55.384239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.888 [2024-07-24 22:54:55.384263] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.888
[identical *ERROR* line repeated for each failed read from 2024-07-24 22:54:55.384287 through 22:54:55.390094]
00:06:37.889 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[identical *ERROR* line repeated for each failed read from 2024-07-24 22:54:55.390511 through 22:54:55.394496]
00:06:37.890 [2024-07-24 22:54:55.394533] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.394563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.394590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.394617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.394648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.394679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.394710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.394742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.394775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.394808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.394838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.394868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.394902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.394930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.394961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.891 [2024-07-24 22:54:55.394989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.395019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.395067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.395095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.395126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.395164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.395195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.395224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.395254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.395283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.395310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.395340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.395369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.395399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.395426] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.395457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.395490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.395520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.395549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.395579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.395604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.395634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.395666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.395703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.395733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.395766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.395795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.395824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.396201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.396234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.396264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.396295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.396327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.396356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.396384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.396414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.396445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.396478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.396513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.396543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.396575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.396606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.396636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 
22:54:55.396661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.396695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.396727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.396755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.396780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.396804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.396829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.396853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.396878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.396904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.396936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.396966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.396998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 
[2024-07-24 22:54:55.397469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397816] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.397937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.891 [2024-07-24 22:54:55.398206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.398232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.398257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.398281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.398305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.398330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.398355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.398379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.398404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.892 [2024-07-24 22:54:55.398428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.398453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.398478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.398504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.398528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.398553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.398577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.398602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.398922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.398953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.398984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.399014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.399043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.399082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.399111] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.399140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.399169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.399200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.399233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.399265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.399294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.399324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.399356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.399388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.399428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.399458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.399484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.399516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.399547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.399583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.399613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.399646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.399680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.399710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.399740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.399770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.399811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.399840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.399867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.399898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.399930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.399961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.399989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 
22:54:55.400020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.400047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.400079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.400107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.400143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.400310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.400339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.400371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.400400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.400426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.400458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.400490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.400520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.400555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.400584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.400612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.400645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.400675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.400706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.400737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.400769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.400818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.400848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.400878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.400913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.400946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.400975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.401005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.401033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 
[2024-07-24 22:54:55.401070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.401100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.401133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.401162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.401191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.401220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.401251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.401281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.401312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.401343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.401372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.401402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.401431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.401460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.401489] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.892 [2024-07-24 22:54:55.401521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[previous line repeated verbatim through 00:06:37.896 (2024-07-24 22:54:55.411667); duplicates omitted]
[2024-07-24 22:54:55.411691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.411716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.411740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.411768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.411799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.411829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.411861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.411887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.412034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.412069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.412100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.412128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.412173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.412204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.412232] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.412260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.412292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.412325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.412357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.412387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.412418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.412449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.412478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.412510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.412540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.412569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.412602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.412632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.412663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.896 [2024-07-24 22:54:55.412691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.412719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.412758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.412787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.412819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.412850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.412879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.412909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.412939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.412970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.413003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.413038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.413068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.413097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.413130] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.413162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.413191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.413222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.413247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.413278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.413306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.413337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.413368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.413396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.413425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.413460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.413490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.413520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.413550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.413577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.413608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.413638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.413665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.413696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.413725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.413762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.413791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.413821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.413851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.413881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.413937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.413968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 22:54:55.414323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.896 [2024-07-24 
22:54:55.414355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.414385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.414414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.414439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.414468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.414498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.414528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.414557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.414589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.414625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.414654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.414683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.414712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.414740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.414773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.414806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.414835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.414866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.414899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.414928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.414956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.414982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.415014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.415041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.415069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.415106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.415139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.415170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.415195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 
[2024-07-24 22:54:55.415223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.415254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.415280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.415310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.415339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.415368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.415397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.415421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.415450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.415478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.415503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.415536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.415568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.415600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.415630] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.415660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.415691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.415720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.415750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.415785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.415814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.415847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.415876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.415907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.415937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.415967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.415999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.416028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.416054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.897 [2024-07-24 22:54:55.416097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.416125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.416154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.416184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.416215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.416606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.416636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.416664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.416697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.416726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.416762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.416792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.416824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.416855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.416883] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.416917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.416943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.416973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.417001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.417033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.417060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.417090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.417121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.417154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.417186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.417215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.417242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.417279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.417313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.417344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.417376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.417409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.417436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.417486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.417516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.417549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.417579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.417609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.417643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.897 [2024-07-24 22:54:55.417674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.898 [2024-07-24 22:54:55.417703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.898 [2024-07-24 22:54:55.417764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.898 [2024-07-24 22:54:55.417796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.898 [2024-07-24 
22:54:55.417823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.898 [2024-07-24 22:54:55.417856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.898 [2024-07-24 22:54:55.417885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.898 [2024-07-24 22:54:55.417916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.898 [2024-07-24 22:54:55.417946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.898 [2024-07-24 22:54:55.417977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.898 [2024-07-24 22:54:55.418016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.898 [2024-07-24 22:54:55.418046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.898 [2024-07-24 22:54:55.418077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.898 [2024-07-24 22:54:55.418106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.898 [2024-07-24 22:54:55.418138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.898 [2024-07-24 22:54:55.418180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.898 [2024-07-24 22:54:55.418208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.898 [2024-07-24 22:54:55.418233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.898 [2024-07-24 22:54:55.418265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.898 [2024-07-24 22:54:55.418304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.898 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:37.900 [2024-07-24 
22:54:55.429546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.429575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.429604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.429634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.429662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.429704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.429735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.429774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.429800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.429834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.429866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.429897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.429933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.429962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.429991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.430023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.430057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.430088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.430121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.430149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.430180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.430210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.430245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.430277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.430307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.430345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.430709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.430754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.430785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 
[2024-07-24 22:54:55.430814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.430844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.430894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.430923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.430952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.430998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.431031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.431061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.431119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.431151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.431180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.431216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.431259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.431287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.431317] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.431354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.431385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.431415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.431444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.431482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.431513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.431539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.901 [2024-07-24 22:54:55.431570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.431605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.431633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.431664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.431695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.431730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.431765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.902 [2024-07-24 22:54:55.431796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.431828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.431857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.431884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.431914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.431944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.431974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.432004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.432034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.432066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.432096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.432125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.432153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.432183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.432215] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.432244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.432275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.432312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.432343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.432376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.432433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.432465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.432493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.432522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.432553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.432584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.432614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.432645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.432674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.432705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.432736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.433106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.433134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.433159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.433184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.433210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.433235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.433260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.433292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.433320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.433351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.433383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.433414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 
22:54:55.433447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.433478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.433508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.433534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.433562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.433591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.433621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.433652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.433678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.433703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.433729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.433759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.433784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.433810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.433835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.433865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.433898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.433932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.433961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.433994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.434045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.434076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.434105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.434139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.434172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.434200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.434239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.434267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.434297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 
[2024-07-24 22:54:55.434326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.434353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.434392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.434421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.434450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.434476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.434501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.434526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.434551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.434577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.434601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.434627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.434652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.434677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.434702] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.434728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.434756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.434782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.902 [2024-07-24 22:54:55.434807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.434832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.434857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.434887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.434918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.435354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.435387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.435420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.435454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.435485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.435516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.903 [2024-07-24 22:54:55.435543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.435573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.435604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.435636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.435668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.435697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.435743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.435780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.435810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.435851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.435883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.435912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.435944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.435974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.436007] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.436043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.436071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.436104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.436132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.436161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.436189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.436233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.436264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.436295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.436326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.436364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.436400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.436431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.436460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.903 [2024-07-24 22:54:55.436485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.906 [2024-07-24 22:54:55.447740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.906 [2024-07-24 22:54:55.447769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.906 [2024-07-24 22:54:55.447798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.906 [2024-07-24 22:54:55.447829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.906 [2024-07-24 22:54:55.447861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.447891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.447920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.447953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.447983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.448018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.448046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.448074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.448101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.448129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.448156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 
22:54:55.448185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.448223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.448248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.448280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.448312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.448341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.448370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.448399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.448430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.448460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.448492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.448525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.448550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.448579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.448613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.448646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.448677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.449046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.449080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.449109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.449147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.449178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.449208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.449247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.449278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.449310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.449348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.449380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.449410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 
[2024-07-24 22:54:55.449463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.449492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.449521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.449549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.449578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.449607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.449637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.449668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.449697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.449754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.449782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.449810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.449848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.449878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.449904] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.449936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.449967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.449998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.450026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.450055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.450093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.450125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.450155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.450189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.450219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.450249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.450275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.450306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.450337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.907 [2024-07-24 22:54:55.450367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.450397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.450426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.450463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.450499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.450528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.450560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.450594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.450627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.450655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.450695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.450726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.450760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.450790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.450820] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.450850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.450880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.450910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.450935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.450967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.450995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.451023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.451442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.907 [2024-07-24 22:54:55.451473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.451504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.451536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.451568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.451599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.451632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.451662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.451697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.451725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.451758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.451789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.451831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.451861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.451890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.451924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.451954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.451984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.452014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.452044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.452075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 
22:54:55.452105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.452134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.452163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.452195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.452224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.452251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.452283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.452318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.452351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.452380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.452411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.452447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.452479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.452508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.452543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.452576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.452601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.452633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.452665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.452696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.452727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.452760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.452790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.452827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.452859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.452887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.452917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.452948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.452980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 
[2024-07-24 22:54:55.453014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.453044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.453078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.453112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.453143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.453177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.453207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.453237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.453266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.453309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.453340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.453371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.453405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.453435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.454032] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.454064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.454094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.454125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.454155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.454187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.454220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.454250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.454293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.454322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.454354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.454391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.454421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.454451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.454498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.908 [2024-07-24 22:54:55.454530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.454560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.454591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.454621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.908 [2024-07-24 22:54:55.454649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.909 [2024-07-24 22:54:55.454699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.909 [2024-07-24 22:54:55.454732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.909 [2024-07-24 22:54:55.454764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.909 [2024-07-24 22:54:55.454798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.909 [2024-07-24 22:54:55.454827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.909 [2024-07-24 22:54:55.454857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.909 [2024-07-24 22:54:55.454889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.909 [2024-07-24 22:54:55.454922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.909 [2024-07-24 22:54:55.454954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.909 [2024-07-24 22:54:55.454991] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.909 [2024-07-24 22:54:55.455021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.911 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
block size 512 > SGL length 1 00:06:37.912 [2024-07-24 22:54:55.466358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.912 [2024-07-24 22:54:55.466389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.912 [2024-07-24 22:54:55.466417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.912 [2024-07-24 22:54:55.466446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.912 [2024-07-24 22:54:55.466486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.912 [2024-07-24 22:54:55.466521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.912 [2024-07-24 22:54:55.466558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.912 [2024-07-24 22:54:55.466588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.912 [2024-07-24 22:54:55.466618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.912 [2024-07-24 22:54:55.466649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.912 [2024-07-24 22:54:55.466674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.912 [2024-07-24 22:54:55.466701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.912 [2024-07-24 22:54:55.466731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.912 [2024-07-24 22:54:55.466766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.912 [2024-07-24 
22:54:55.466799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.912 [2024-07-24 22:54:55.466831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.912 [2024-07-24 22:54:55.466861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.912 [2024-07-24 22:54:55.466889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.912 [2024-07-24 22:54:55.466913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.912 [2024-07-24 22:54:55.466937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.912 [2024-07-24 22:54:55.466963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.912 [2024-07-24 22:54:55.466987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.467012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.467037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.467062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.467100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.467129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.467167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.467198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.467235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.467265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.467295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.467327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.467355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.467385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.467426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.467456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.467486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.467517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.467548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.467595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.467625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.467655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 
[2024-07-24 22:54:55.467685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.467712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.467747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.467779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.467807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.467867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.468226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.468257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.468290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.468328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.468360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.468386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.468415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.468447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.468480] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.468512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.468540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.468567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.468596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.468629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.468659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.468686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.468714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.468743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.468776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.468809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.468840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.468869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.468901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.913 [2024-07-24 22:54:55.468932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.468961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.468990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.469044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.469071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.469104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.469136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.469164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.469204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.469233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.469263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.469296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.469326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.469357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.469386] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.469416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.469447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.469476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.469506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.469542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.469574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.469603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.469635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.469665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.469695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.469726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.469756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.469800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.469832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.469860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.469889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.469920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.469953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.469980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.470010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.913 [2024-07-24 22:54:55.470039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.470070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.470103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.470132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.470162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.470641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.470672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.470704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 
22:54:55.470736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.470768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.470796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.470828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.470858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.470890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.470921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.470952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.470983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 
[2024-07-24 22:54:55.471561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471955] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.471984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.472012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.472046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.472077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.472105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.472137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.472165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.472201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.472233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.472261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.472288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.472319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.472346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.472375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:37.914 [2024-07-24 22:54:55.472410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.472441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 [2024-07-24 22:54:55.472470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:37.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.914 22:54:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.176 22:54:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:38.176 22:54:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:38.176 true 00:06:38.176 22:54:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:38.176 22:54:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.437 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.437 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:38.437 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 
1024 00:06:38.697 true 00:06:38.697 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:38.697 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.958 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.958 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:38.958 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:39.218 true 00:06:39.218 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:39.218 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.218 22:54:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.479 22:54:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:39.479 22:54:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:39.739 true 00:06:39.739 22:54:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill 
-0 646063 00:06:39.739 22:54:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.000 22:54:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.000 22:54:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:40.000 22:54:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:40.260 true 00:06:40.260 22:54:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:40.260 22:54:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.522 22:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.522 22:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:40.522 22:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:40.782 true 00:06:40.782 22:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:40.782 22:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.782 22:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.042 22:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:41.042 22:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:41.303 true 00:06:41.303 22:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:41.303 22:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.245 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.245 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:42.245 22:54:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:42.505 true 00:06:42.505 22:55:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:42.505 22:55:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.766 22:55:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.766 22:55:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:42.766 22:55:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:43.026 true 00:06:43.026 22:55:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:43.026 22:55:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.026 22:55:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.287 22:55:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:43.287 22:55:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:43.547 true 00:06:43.547 22:55:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:43.547 22:55:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:06:43.547 22:55:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.807 22:55:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:06:43.807 22:55:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:06:44.067 true 00:06:44.067 22:55:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:44.067 22:55:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.067 22:55:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.328 22:55:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:06:44.328 22:55:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:06:44.588 true 00:06:44.588 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:44.588 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.588 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.848 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:06:44.848 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:06:44.848 true 00:06:45.109 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:45.109 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.109 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.379 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:06:45.379 22:55:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:06:45.379 true 00:06:45.379 22:55:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:45.379 22:55:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.639 22:55:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.898 
22:55:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:06:45.898 22:55:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:06:45.898 true 00:06:45.898 22:55:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:45.898 22:55:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.158 22:55:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.419 22:55:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:06:46.419 22:55:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:06:46.419 true 00:06:46.419 22:55:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:46.419 22:55:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.680 22:55:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.941 22:55:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:06:46.941 22:55:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:06:46.941 true 00:06:46.941 22:55:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:46.941 22:55:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.201 22:55:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.201 22:55:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:06:47.201 22:55:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:06:47.462 true 00:06:47.462 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:47.462 22:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.404 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.404 22:55:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.664 22:55:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:06:48.664 22:55:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:06:48.664 true 00:06:48.664 22:55:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:48.664 22:55:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.925 22:55:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.925 22:55:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:06:48.925 22:55:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:06:49.184 true 00:06:49.184 22:55:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:49.184 22:55:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.444 22:55:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.444 22:55:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:06:49.444 22:55:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 
00:06:49.704 true 00:06:49.705 22:55:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:49.705 22:55:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.965 22:55:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.965 22:55:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:06:49.965 22:55:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:06:50.225 true 00:06:50.225 22:55:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:50.225 22:55:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.486 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.486 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:06:50.486 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:06:50.747 true 00:06:50.747 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
646063 00:06:50.747 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.747 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.008 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:06:51.008 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:06:51.376 true 00:06:51.376 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:51.376 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.376 22:55:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.637 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:06:51.637 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:06:51.637 true 00:06:51.637 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:51.637 22:55:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.579 22:55:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.839 22:55:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:06:52.839 22:55:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:06:52.839 true 00:06:52.839 22:55:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:52.839 22:55:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.100 Initializing NVMe Controllers 00:06:53.100 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:53.100 Controller IO queue size 128, less than required. 00:06:53.100 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:53.100 Controller IO queue size 128, less than required. 00:06:53.100 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:53.100 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:53.100 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:53.100 Initialization complete. Launching workers. 
00:06:53.100 ======================================================== 00:06:53.100 Latency(us) 00:06:53.100 Device Information : IOPS MiB/s Average min max 00:06:53.100 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1607.52 0.78 23858.08 1598.03 1145794.50 00:06:53.100 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9019.26 4.40 14191.44 2236.58 502654.23 00:06:53.100 ======================================================== 00:06:53.100 Total : 10626.78 5.19 15653.72 1598.03 1145794.50 00:06:53.100 00:06:53.100 22:55:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.100 22:55:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:06:53.100 22:55:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:06:53.362 true 00:06:53.362 22:55:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 646063 00:06:53.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (646063) - No such process 00:06:53.362 22:55:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 646063 00:06:53.362 22:55:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.622 22:55:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:53.622 
22:55:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:53.622 22:55:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:53.622 22:55:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:53.622 22:55:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:53.622 22:55:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:53.883 null0 00:06:53.883 22:55:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:53.883 22:55:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:53.883 22:55:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:54.144 null1 00:06:54.144 22:55:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:54.144 22:55:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:54.144 22:55:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:54.144 null2 00:06:54.144 22:55:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:54.144 22:55:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:54.144 22:55:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:54.405 null3 00:06:54.405 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:54.405 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:54.405 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:54.665 null4 00:06:54.665 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:54.665 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:54.665 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:54.665 null5 00:06:54.666 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:54.666 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:54.666 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:54.926 null6 00:06:54.926 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:54.927 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:54.927 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:54.927 null7 00:06:54.927 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:54.927 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:54.927 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:54.927 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:54.927 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:54.927 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:54.927 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:54.927 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:54.927 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:54.927 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:54.927 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.927 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:54.927 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:54.927 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:54.927 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:54.927 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:54.927 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:54.927 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:55.189 
22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 652782 652784 652787 652790 652793 652796 652798 652800
00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:55.189 22:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:55.450 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.450 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.450 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:55.450 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.450 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.451 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:55.451 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.451 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.451 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:55.451 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.451 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.451 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:55.451 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.451 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.451 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:55.451 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.451 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.451 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:55.451 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.451 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.451 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:55.451 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.451 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.451 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:55.451 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:55.712 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:55.712 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:55.712 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:55.712 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:55.712 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:55.712 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:55.712 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:55.712 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.712 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.712 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:55.712 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.712 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.712 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:55.712 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.712 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.712 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:55.712 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.712 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.712 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:55.712 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.712 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.712 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:55.712 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.712 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.712 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:55.712 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.712 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.712 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:55.712 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.712 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.712 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:55.973 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:55.973 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:55.973 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:55.973 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:55.973 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:55.973 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:55.973 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:55.973 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:55.973 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.973 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.973 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:55.973 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.973 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.973 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:55.973 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.973 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.973 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:55.973 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:55.973 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:55.973 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:56.234 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:56.234 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:56.234 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:56.234 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:56.234 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:56.234 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:56.234 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:56.234 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:56.234 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:56.234 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:56.234 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:56.234 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:56.234 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:56.234 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:56.234 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:56.234 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:56.234 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:56.234 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:56.234 22:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:56.234 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:56.496 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:56.496 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:56.496 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:56.496 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:56.496 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:56.496 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:56.496 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:56.496 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:56.496 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:56.496 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:56.496 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:56.496 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:56.496 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:56.496 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:56.496 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:56.496 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:56.496 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:56.496 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:56.496 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:56.496 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:56.496 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:56.496 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:56.496 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:56.496 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:56.496 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:56.496 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:56.496 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:56.496 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:56.496 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:56.496 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:56.758 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:56.758 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:56.758 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:56.758 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:56.758 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:56.758 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:56.758 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:56.758 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:56.758 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:56.758 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:56.758 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:56.758 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:56.758 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:56.758 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:56.758 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:56.758 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:56.758 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:56.758 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:56.758 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:56.758 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:56.758 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:56.758 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:56.758 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:57.019 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:57.019 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:57.019 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:57.019 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:57.019 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:57.019 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:57.019 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:57.019 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:57.019 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:57.019 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:57.019 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:57.019 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:57.019 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:57.019 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:57.019 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:57.019 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:57.019 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:57.019 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:57.019 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:57.019 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:57.019 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:57.019 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:57.019 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:57.019 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:57.019 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:57.019 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:57.019 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:57.019 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:57.019 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:57.281 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:57.281 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:57.281 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:57.281 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:57.281 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:57.281 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:57.281 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:57.281 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:57.281 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:57.281 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:57.281 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:57.281 22:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:57.281 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:57.281 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:57.281 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:57.542 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:57.542 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.542 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.542 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:57.542 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.542 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.542 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:57.542 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:57.542 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.542 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.542 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:57.542 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.542 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.542 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:57.542 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.542 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.542 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:57.542 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.542 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.542 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:57.542 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:57.542 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.542 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:57.542 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:57.543 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.543 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.543 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:57.543 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:57.803 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:57.803 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:57.803 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.804 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.804 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:57.804 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.804 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.804 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:57.804 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.804 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.804 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:57.804 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:57.804 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.804 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.804 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:57.804 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.804 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.804 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:57.804 22:55:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.804 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.804 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:57.804 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.804 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.804 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:57.804 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:58.065 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.065 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:58.065 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:58.065 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.065 22:55:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.065 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:58.065 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:58.065 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:58.065 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:58.065 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.065 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.065 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:58.065 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.065 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.065 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:58.065 
22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.065 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.065 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:58.065 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.065 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.066 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:58.066 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.066 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.066 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:58.066 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.066 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.066 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:58.066 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:58.066 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.066 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.066 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:58.327 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:58.327 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:58.327 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:58.327 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.327 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:58.327 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:06:58.327 22:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:58.327 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.327 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.327 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:58.327 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.327 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.327 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.327 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.327 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.327 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.327 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.327 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.327 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.327 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.588 22:55:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.588 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.588 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.588 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.588 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:58.588 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.588 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.588 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:58.588 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:58.588 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:58.588 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:06:58.588 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:58.588 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:06:58.588 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:58.588 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:58.588 rmmod nvme_tcp 00:06:58.588 rmmod nvme_fabrics 00:06:58.850 rmmod nvme_keyring 00:06:58.850 22:55:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:58.850 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:06:58.850 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:06:58.850 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 645675 ']' 00:06:58.850 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 645675 00:06:58.850 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 645675 ']' 00:06:58.850 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 645675 00:06:58.850 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:06:58.850 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:58.850 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 645675 00:06:58.850 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:58.850 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:58.850 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 645675' 00:06:58.850 killing process with pid 645675 00:06:58.850 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 645675 00:06:58.850 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 645675 00:06:58.850 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:06:58.850 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:58.850 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:58.850 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:58.850 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:58.850 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:58.850 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:58.850 22:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:01.395 00:07:01.395 real 0m48.894s 00:07:01.395 user 3m11.665s 00:07:01.395 sys 0m16.085s 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:01.395 ************************************ 00:07:01.395 END TEST nvmf_ns_hotplug_stress 00:07:01.395 ************************************ 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:07:01.395 ************************************ 00:07:01.395 START TEST nvmf_delete_subsystem 00:07:01.395 ************************************ 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:01.395 * Looking for test storage... 00:07:01.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.395 22:55:18 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:01.395 22:55:18 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:01.395 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.396 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:01.396 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:01.396 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:01.396 22:55:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:09.546 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:09.546 22:55:26 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:09.546 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:09.546 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:09.547 Found net devices under 0000:31:00.0: cvl_0_0 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:09.547 Found net devices under 0000:31:00.1: cvl_0_1 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip 
-4 addr flush cvl_0_1 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:09.547 22:55:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:09.547 22:55:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:09.547 22:55:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:09.547 22:55:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:09.547 22:55:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:09.547 22:55:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:09.547 22:55:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:09.547 22:55:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:09.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:09.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:07:09.547 00:07:09.547 --- 10.0.0.2 ping statistics --- 00:07:09.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.547 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:07:09.547 22:55:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:09.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:09.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:07:09.547 00:07:09.547 --- 10.0.0.1 ping statistics --- 00:07:09.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.547 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:07:09.547 22:55:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:09.547 22:55:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:07:09.547 22:55:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:09.547 22:55:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:09.547 22:55:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:09.547 22:55:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:09.547 22:55:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:09.547 22:55:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:09.547 22:55:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:09.547 22:55:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:09.547 22:55:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:09.547 22:55:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:09.547 22:55:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.547 22:55:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=658428 00:07:09.547 22:55:27 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 658428 00:07:09.547 22:55:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:09.547 22:55:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 658428 ']' 00:07:09.547 22:55:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.547 22:55:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.547 22:55:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.547 22:55:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.547 22:55:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.547 [2024-07-24 22:55:27.268137] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:07:09.547 [2024-07-24 22:55:27.268202] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.547 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.808 [2024-07-24 22:55:27.346895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:09.808 [2024-07-24 22:55:27.421145] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:07:09.808 [2024-07-24 22:55:27.421189] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:09.808 [2024-07-24 22:55:27.421196] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:09.808 [2024-07-24 22:55:27.421202] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:09.808 [2024-07-24 22:55:27.421208] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:09.808 [2024-07-24 22:55:27.421347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.808 [2024-07-24 22:55:27.421348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.381 22:55:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.381 22:55:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:07:10.381 22:55:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:10.381 22:55:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:10.381 22:55:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:10.381 22:55:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:10.381 22:55:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:10.381 22:55:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.381 22:55:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:10.381 [2024-07-24 22:55:28.084830] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:10.381 22:55:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.381 22:55:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:10.381 22:55:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.381 22:55:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:10.381 22:55:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.381 22:55:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:10.381 22:55:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.381 22:55:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:10.381 [2024-07-24 22:55:28.109003] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:10.381 22:55:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.381 22:55:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:10.381 22:55:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.381 22:55:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:10.381 NULL1 00:07:10.381 22:55:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.381 22:55:28 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:10.381 22:55:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.381 22:55:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:10.381 Delay0 00:07:10.381 22:55:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.381 22:55:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.381 22:55:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.381 22:55:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:10.381 22:55:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.381 22:55:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=658744 00:07:10.381 22:55:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:10.381 22:55:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:10.641 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.641 [2024-07-24 22:55:28.205631] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:07:12.553 22:55:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:12.553 22:55:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.554 22:55:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:12.815 Write completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 starting I/O failed: -6 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 Write completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 starting I/O failed: -6 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 starting I/O failed: -6 00:07:12.815 Write completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 Write completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 starting I/O failed: -6 00:07:12.815 Write completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 starting I/O failed: -6 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 Write completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 starting I/O failed: -6 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error 
(sct=0, sc=8) 00:07:12.815 Write completed with error (sct=0, sc=8) 00:07:12.815 starting I/O failed: -6 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 starting I/O failed: -6 00:07:12.815 Write completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 starting I/O failed: -6 00:07:12.815 Write completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 starting I/O failed: -6 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 Write completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 starting I/O failed: -6 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 Write completed with error (sct=0, sc=8) 00:07:12.815 [2024-07-24 22:55:30.430230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x829e90 is same with the state(5) to be set 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 Write completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 Write completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 Read completed with error (sct=0, sc=8) 00:07:12.815 Read completed with 
error (sct=0, sc=8)
00:07:12.815 Read completed with error (sct=0, sc=8)
00:07:12.815 Write completed with error (sct=0, sc=8)
00:07:12.815 [... repeated Read/Write completions with error (sct=0, sc=8) ...]
00:07:12.816 starting I/O failed: -6
00:07:12.816 [... repeated Read/Write completions with error (sct=0, sc=8), interleaved with further "starting I/O failed: -6" lines ...]
00:07:12.816 [2024-07-24 22:55:30.431266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fda8800d000 is same with the state(5) to be set
00:07:12.816 [... repeated Read/Write completions with error (sct=0, sc=8) ...]
00:07:13.758 [2024-07-24 22:55:31.387824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x808500 is same with the state(5) to be set
00:07:13.758 [... repeated Read/Write completions with error (sct=0, sc=8) ...]
00:07:13.758 [2024-07-24 22:55:31.431859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x829cb0 is same with the state(5) to be set
00:07:13.758 [... repeated Read/Write completions with error (sct=0, sc=8) ...]
00:07:13.758 [2024-07-24 22:55:31.432966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fda88000c00 is same with the state(5) to be set
00:07:13.758 [... repeated Read/Write completions with error (sct=0, sc=8) ...]
00:07:13.758 [2024-07-24 22:55:31.433083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fda8800d330 is same with the state(5) to be set
00:07:13.758 [... repeated Read/Write completions with error (sct=0, sc=8) ...]
00:07:13.758 [2024-07-24 22:55:31.433694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x828d00 is same with the state(5) to be set
00:07:13.758 Initializing NVMe Controllers
00:07:13.758 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:13.758 Controller IO queue size 128, less than required.
00:07:13.758 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:13.758 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:13.758 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:13.758 Initialization complete. Launching workers.
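The status pairs flooding the log above all carry (sct=0, sc=8). In the NVMe base specification's status tables, status code type 0 is the Generic Command Status set, and generic status code 0x08 is "Command Aborted due to SQ Deletion" — exactly what is expected here, since the test deletes the subsystem while spdk_nvme_perf still has I/O in flight. A minimal decoder sketch (the symbolic names are illustrative, not SPDK's own identifiers):

```shell
#!/usr/bin/env bash
# Hedged sketch: decode the (sct, sc) pairs printed in completion lines above.
# Mapping follows the NVMe base spec's generic status table; only the codes
# relevant to this log are spelled out, everything else falls through numerically.
decode_status() {
    local sct=$1 sc=$2
    case "$sct" in
        0)  # Generic Command Status
            case "$sc" in
                0) echo "GENERIC/SUCCESS" ;;
                7) echo "GENERIC/ABORT_REQUESTED" ;;
                8) echo "GENERIC/ABORTED_SQ_DELETION" ;;  # the (sct=0, sc=8) seen above
                *) echo "GENERIC/SC_$sc" ;;
            esac ;;
        1) echo "COMMAND_SPECIFIC/SC_$sc" ;;
        2) echo "MEDIA_ERROR/SC_$sc" ;;
        *) echo "SCT_$sct/SC_$sc" ;;
    esac
}

decode_status 0 8   # prints GENERIC/ABORTED_SQ_DELETION
```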
00:07:13.758 ========================================================
00:07:13.758                                                                                                      Latency(us)
00:07:13.758 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:07:13.758 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     170.07       0.08  893000.76     237.32 1009151.29
00:07:13.758 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     167.09       0.08  953586.41     246.72 2001029.17
00:07:13.758 ========================================================
00:07:13.758 Total                                                                    :     337.16       0.16  923025.51     237.32 2001029.17
00:07:13.758
00:07:13.758 [2024-07-24 22:55:31.434307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x808500 (9): Bad file descriptor
00:07:13.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:13.759 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:13.759 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:13.759 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 658744
00:07:13.759 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:14.328 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:14.328 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 658744
00:07:14.328 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (658744) - No such process
00:07:14.328 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 658744
00:07:14.328 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:07:14.328 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 658744
00:07:14.328 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:07:14.328 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:14.328 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:07:14.328 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:14.328 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 658744
00:07:14.328 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:07:14.328 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:14.328 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:14.328 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:14.328 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:14.328 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:14.328 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:14.328 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:14.328 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:14.328 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:14.328 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:14.328 [2024-07-24 22:55:31.967265] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:14.329 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:14.329 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:14.329 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:14.329 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:14.329 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:14.329 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=659477
00:07:14.329 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:07:14.329 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:07:14.329 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 659477
00:07:14.329 22:55:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:14.329 EAL: No free 2048 kB hugepages reported on node 1
00:07:14.329 [2024-07-24 22:55:32.035366] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:07:14.899 22:55:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:14.899 22:55:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 659477
00:07:14.899 22:55:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:15.469 22:55:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:15.469 22:55:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 659477
00:07:15.469 22:55:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:15.729 22:55:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:15.729 22:55:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 659477
00:07:15.729 22:55:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:16.299 22:55:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:16.299 22:55:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 659477
00:07:16.299 22:55:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:16.875 22:55:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:16.875 22:55:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 659477
00:07:16.875 22:55:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
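The `(( delay++ > 20 ))` / `kill -0` / `sleep 0.5` lines traced above are the test's wait-for-exit loop: it polls a background PID until the process goes away or an iteration cap trips. A hedged sketch of the same shape (the workload command and the cap are illustrative stand-ins, not the real script's values):

```shell
#!/usr/bin/env bash
# Hedged sketch of the polling pattern visible in the trace above.
sleep 1 &                 # stand-in for the real background workload (e.g. spdk_nvme_perf)
perf_pid=$!

delay=0
# kill -0 sends no signal at all; it only tests whether the PID still exists,
# which is why the trace shows "kill: (659477) - No such process" once it exits.
while kill -0 "$perf_pid" 2> /dev/null; do
    if (( delay++ > 20 )); then
        echo "timed out waiting for PID $perf_pid" >&2
        exit 1
    fi
    sleep 0.5
done
echo "PID $perf_pid has exited"
```

The cap turns an indefinite wait into a bounded one: with a 0.5 s sleep and a limit of 20 iterations, the loop gives up after roughly ten seconds.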
00:07:17.447 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:17.447 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 659477
00:07:17.447 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:17.708 Initializing NVMe Controllers
00:07:17.708 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:17.708 Controller IO queue size 128, less than required.
00:07:17.708 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:17.708 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:17.708 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:17.708 Initialization complete. Launching workers.
00:07:17.708 ========================================================
00:07:17.708                                                                                                      Latency(us)
00:07:17.708 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:07:17.708 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1002082.53 1000232.68 1006861.08
00:07:17.708 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1003028.88 1000296.75 1009339.79
00:07:17.708 ========================================================
00:07:17.708 Total                                                                    :     256.00       0.12 1002555.70 1000232.68 1009339.79
00:07:17.708
00:07:17.969 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:17.969 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 659477
00:07:17.969 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (659477) - No such process
00:07:17.969 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 659477
00:07:17.969 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:17.969 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:07:17.969 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:07:17.969 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:07:17.969 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:07:17.969 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:07:17.969 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:17.969 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:07:17.969 rmmod nvme_tcp
00:07:17.969 rmmod nvme_fabrics
00:07:17.969 rmmod nvme_keyring
00:07:17.970 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:17.970 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:07:17.970 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:07:17.970 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 658428 ']'
00:07:17.970 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 658428
00:07:17.970 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 658428 ']'
00:07:17.970 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 658428
00:07:17.970 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:07:17.970 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:17.970 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 658428
00:07:17.970 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:17.970 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:17.970 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 658428'
00:07:17.970 killing process with pid 658428
00:07:17.970 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 658428
00:07:17.970 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 658428
00:07:18.231 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:07:18.231 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:07:18.231 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:07:18.231 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:07:18.231 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns
00:07:18.231 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:18.231 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:18.231 22:55:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:20.148 22:55:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:07:20.148
00:07:20.148 real	0m19.113s
00:07:20.148 user	0m31.205s
00:07:20.148 sys	0m7.139s
00:07:20.148 22:55:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:20.148 22:55:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:20.148 ************************************
00:07:20.148 END TEST nvmf_delete_subsystem
00:07:20.148 ************************************
00:07:20.148 22:55:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:07:20.148 22:55:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:07:20.148 22:55:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:20.148 22:55:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:20.148 ************************************
00:07:20.148 START TEST nvmf_host_management
00:07:20.148 ************************************
00:07:20.148 22:55:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:07:20.410 * Looking for test storage...
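The "START TEST" / "END TEST" banners and the `real`/`user`/`sys` summary above are produced by the harness's run_test helper (defined in SPDK's autotest_common.sh; its exact body is not shown in this log). A hedged, minimal stand-in that reproduces only the visible shape, with illustrative names:

```shell
#!/usr/bin/env bash
# Hedged sketch of a run_test-style wrapper: banner, timed run, banner.
# run_test_sketch is a hypothetical name; the real helper lives in
# autotest_common.sh and does considerably more (xtrace control, arg checks).
run_test_sketch() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"          # the time keyword prints the real/user/sys summary
    local rc=$?        # preserve the test command's exit status
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test_sketch demo_test true
```

Because `time` is a shell keyword rather than an external command, the wrapped command's exit status passes through it unchanged, so the wrapper can both print timing and report pass/fail.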
00:07:20.410 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:20.410 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:20.410 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s
00:07:20.410 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:20.410 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:20.410 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:20.410 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:20.410 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:20.410 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:20.410 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:20.410 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:20.410 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:20.410 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:20.410 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:07:20.410 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:07:20.410 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:20.410 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:20.410 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:20.410 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:20.410 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:20.410 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:20.410 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:20.410 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:20.410 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:20.410 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:20.411 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:20.411 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:07:20.411 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:20.411 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0
00:07:20.411 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:07:20.411 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:07:20.411 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:20.411 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:20.411 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:20.411 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:07:20.411 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:07:20.411 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0
00:07:20.411 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64
00:07:20.411 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:07:20.411 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- #
nvmftestinit 00:07:20.411 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:20.411 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:20.411 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:20.411 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:20.411 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:20.411 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.411 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:20.411 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.411 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:20.411 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:20.411 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:07:20.411 22:55:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:28.561 22:55:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:28.561 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:28.561 
22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:28.561 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:28.561 
22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:28.561 Found net devices under 0000:31:00.0: cvl_0_0 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:28.561 Found net devices under 0000:31:00.1: cvl_0_1 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:28.561 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:28.562 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:28.562 
22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:28.562 22:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:28.562 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:28.562 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:28.562 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:28.562 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:28.562 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:28.562 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:28.562 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:28.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:28.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:07:28.562 00:07:28.562 --- 10.0.0.2 ping statistics --- 00:07:28.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.562 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:07:28.562 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:28.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:28.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.359 ms 00:07:28.562 00:07:28.562 --- 10.0.0.1 ping statistics --- 00:07:28.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.562 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:07:28.562 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:28.562 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:07:28.562 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:28.562 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:28.562 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:28.562 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:28.562 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:28.562 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:28.562 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:28.562 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:28.562 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:28.562 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:28.562 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:28.562 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:28.562 22:55:46 
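The target/initiator plumbing traced above (namespace creation, moving the target NIC, addressing both ends, opening the NVMe/TCP port, and the cross-pings) can be sketched as a standalone script. The interface names (cvl_0_0/cvl_0_1), the IPs, the namespace name, and the iptables rule are taken directly from the trace; the `run`/`DRY_RUN` wrapper is an illustrative addition so the sequence can be printed and inspected without root privileges.

```shell
#!/usr/bin/env bash
# Sketch of the namespace-based TCP test topology set up in the trace above.
# DRY_RUN=1 (the default, an assumption added here) echoes each command
# instead of executing it, since the real sequence needs root and a phy NIC.
set -euo pipefail

run() { if [[ "${DRY_RUN:-1}" == 1 ]]; then echo "+ $*"; else "$@"; fi; }

setup_tcp_testnet() {
  local ns=cvl_0_0_ns_spdk tgt_if=cvl_0_0 ini_if=cvl_0_1
  run ip -4 addr flush "$tgt_if"
  run ip -4 addr flush "$ini_if"
  run ip netns add "$ns"
  run ip link set "$tgt_if" netns "$ns"                          # target NIC lives in the namespace
  run ip addr add 10.0.0.1/24 dev "$ini_if"                      # initiator side
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"  # target side
  run ip link set "$ini_if" up
  run ip netns exec "$ns" ip link set "$tgt_if" up
  run ip netns exec "$ns" ip link set lo up
  run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT  # open NVMe/TCP port
  run ping -c 1 10.0.0.2                                         # initiator -> target
  run ip netns exec "$ns" ping -c 1 10.0.0.1                     # target -> initiator
}

setup_tcp_testnet
```

With the NIC isolated in its own namespace, the target (`nvmf_tgt`) is later launched under `ip netns exec cvl_0_0_ns_spdk` while the initiator-side tools run in the root namespace, which is why traffic between 10.0.0.1 and 10.0.0.2 crosses the physical link.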
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:28.562 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=664843 00:07:28.562 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 664843 00:07:28.562 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:28.562 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 664843 ']' 00:07:28.562 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.562 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:28.562 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.562 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:28.562 22:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:28.562 [2024-07-24 22:55:46.260996] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:07:28.562 [2024-07-24 22:55:46.261059] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.562 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.562 [2024-07-24 22:55:46.331431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:28.824 [2024-07-24 22:55:46.399187] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:28.824 [2024-07-24 22:55:46.399223] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:28.824 [2024-07-24 22:55:46.399228] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:28.824 [2024-07-24 22:55:46.399233] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:28.824 [2024-07-24 22:55:46.399237] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:28.824 [2024-07-24 22:55:46.399337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.824 [2024-07-24 22:55:46.399501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:28.824 [2024-07-24 22:55:46.399665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.824 [2024-07-24 22:55:46.399667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.460 [2024-07-24 22:55:47.117869] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:29.460 22:55:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.460 Malloc0 00:07:29.460 [2024-07-24 22:55:47.178686] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=665217 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 665217 /var/tmp/bdevperf.sock 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 665217 ']' 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:29.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:29.460 { 00:07:29.460 "params": { 00:07:29.460 "name": "Nvme$subsystem", 00:07:29.460 "trtype": "$TEST_TRANSPORT", 00:07:29.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:29.460 "adrfam": "ipv4", 00:07:29.460 "trsvcid": "$NVMF_PORT", 00:07:29.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:29.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:29.460 "hdgst": ${hdgst:-false}, 
00:07:29.460 "ddgst": ${ddgst:-false} 00:07:29.460 }, 00:07:29.460 "method": "bdev_nvme_attach_controller" 00:07:29.460 } 00:07:29.460 EOF 00:07:29.460 )") 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:29.460 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:29.721 22:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:29.721 "params": { 00:07:29.721 "name": "Nvme0", 00:07:29.721 "trtype": "tcp", 00:07:29.721 "traddr": "10.0.0.2", 00:07:29.721 "adrfam": "ipv4", 00:07:29.721 "trsvcid": "4420", 00:07:29.721 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:29.721 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:29.721 "hdgst": false, 00:07:29.721 "ddgst": false 00:07:29.721 }, 00:07:29.721 "method": "bdev_nvme_attach_controller" 00:07:29.721 }' 00:07:29.721 [2024-07-24 22:55:47.281510] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:07:29.721 [2024-07-24 22:55:47.281561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid665217 ] 00:07:29.721 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.721 [2024-07-24 22:55:47.347818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.721 [2024-07-24 22:55:47.412293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.982 Running I/O for 10 seconds... 
00:07:30.555 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:30.555 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:30.555 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:30.555 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.555 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:30.555 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.555 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:30.555 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:30.555 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:30.555 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:30.555 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:30.555 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:30.555 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:30.555 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:30.555 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:07:30.555 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:30.555 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.555 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:30.555 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.555 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:07:30.555 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:07:30.555 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:30.555 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:30.555 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:30.555 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:30.555 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.555 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:30.555 [2024-07-24 22:55:48.133695] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc469a0 is same with the state(5) to be set 00:07:30.555 [2024-07-24 22:55:48.133744] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc469a0 is same with the state(5) to be set 00:07:30.555 [2024-07-24 22:55:48.134253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.555 [2024-07-24 22:55:48.134290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.555 [2024-07-24 22:55:48.134306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.555 [2024-07-24 22:55:48.134314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.555 [2024-07-24 22:55:48.134324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.555 [2024-07-24 22:55:48.134332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.555 [2024-07-24 22:55:48.134341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.555 [2024-07-24 22:55:48.134348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.555 [2024-07-24 22:55:48.134357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.555 [2024-07-24 22:55:48.134373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:07:30.556 [2024-07-24 22:55:48.134398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134487] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134775] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:07:30.556 [2024-07-24 22:55:48.134966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.556 [2024-07-24 22:55:48.134973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.557 [2024-07-24 22:55:48.134983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.557 [2024-07-24 22:55:48.134990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.557 [2024-07-24 22:55:48.135001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.557 [2024-07-24 22:55:48.135008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.557 [2024-07-24 22:55:48.135017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.557 [2024-07-24 22:55:48.135024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.557 [2024-07-24 22:55:48.135034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.557 [2024-07-24 22:55:48.135042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.557 [2024-07-24 22:55:48.135051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.557 [2024-07-24 
22:55:48.135058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.557 [2024-07-24 22:55:48.135067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.557 [2024-07-24 22:55:48.135074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.557 [2024-07-24 22:55:48.135083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.557 [2024-07-24 22:55:48.135091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.557 [2024-07-24 22:55:48.135100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.557 [2024-07-24 22:55:48.135107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.557 [2024-07-24 22:55:48.135116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.557 [2024-07-24 22:55:48.135123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.557 [2024-07-24 22:55:48.135132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.557 [2024-07-24 22:55:48.135138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.557 [2024-07-24 22:55:48.135149] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.557 [2024-07-24 22:55:48.135157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.557 [2024-07-24 22:55:48.135166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.557 [2024-07-24 22:55:48.135173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.557 [2024-07-24 22:55:48.135182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.557 [2024-07-24 22:55:48.135189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.557 [2024-07-24 22:55:48.135199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.557 [2024-07-24 22:55:48.135208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.557 [2024-07-24 22:55:48.135218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.557 [2024-07-24 22:55:48.135224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.557 [2024-07-24 22:55:48.135233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.557 [2024-07-24 22:55:48.135240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.557 [2024-07-24 22:55:48.135250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.557 [2024-07-24 22:55:48.135257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.557 [2024-07-24 22:55:48.135266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.557 [2024-07-24 22:55:48.135273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.557 [2024-07-24 22:55:48.135282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.557 [2024-07-24 22:55:48.135289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.557 [2024-07-24 22:55:48.135299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.557 [2024-07-24 22:55:48.135306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.557 [2024-07-24 22:55:48.135315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.557 [2024-07-24 22:55:48.135322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.557 [2024-07-24 22:55:48.135331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.557 
[2024-07-24 22:55:48.135338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.557 [2024-07-24 22:55:48.135347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.557 [2024-07-24 22:55:48.135355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.557 [2024-07-24 22:55:48.135363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2031b00 is same with the state(5) to be set 00:07:30.557 [2024-07-24 22:55:48.135403] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2031b00 was disconnected and freed. reset controller. 00:07:30.557 [2024-07-24 22:55:48.136641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:30.557 task offset: 88064 on job bdev=Nvme0n1 fails 00:07:30.557 00:07:30.557 Latency(us) 00:07:30.557 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:30.557 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:30.557 Job: Nvme0n1 ended in about 0.54 seconds with error 00:07:30.557 Verification LBA range: start 0x0 length 0x400 00:07:30.557 Nvme0n1 : 0.54 1186.22 74.14 118.62 0.00 47876.95 1576.96 40195.41 00:07:30.557 =================================================================================================================== 00:07:30.557 Total : 1186.22 74.14 118.62 0.00 47876.95 1576.96 40195.41 00:07:30.557 [2024-07-24 22:55:48.138667] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:30.557 [2024-07-24 22:55:48.138689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c00540 (9): Bad file descriptor 00:07:30.557 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.557 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:30.557 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.557 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:30.557 [2024-07-24 22:55:48.141455] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:07:30.557 [2024-07-24 22:55:48.141549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:30.557 [2024-07-24 22:55:48.141572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:30.557 [2024-07-24 22:55:48.141587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:07:30.557 [2024-07-24 22:55:48.141595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:30.557 [2024-07-24 22:55:48.141603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:07:30.557 [2024-07-24 22:55:48.141609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c00540 00:07:30.557 [2024-07-24 22:55:48.141629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c00540 (9): Bad file descriptor 00:07:30.557 [2024-07-24 22:55:48.141641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:07:30.557 [2024-07-24 22:55:48.141648] 
nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:07:30.557 [2024-07-24 22:55:48.141657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:07:30.557 [2024-07-24 22:55:48.141669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:30.557 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.557 22:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:31.500 22:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 665217 00:07:31.500 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (665217) - No such process 00:07:31.500 22:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:31.500 22:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:31.500 22:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:31.500 22:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:31.500 22:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:31.500 22:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:31.500 22:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:31.500 22:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:31.500 { 00:07:31.500 "params": { 00:07:31.500 "name": "Nvme$subsystem", 00:07:31.500 "trtype": "$TEST_TRANSPORT", 00:07:31.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:31.500 "adrfam": "ipv4", 00:07:31.500 "trsvcid": "$NVMF_PORT", 00:07:31.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:31.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:31.500 "hdgst": ${hdgst:-false}, 00:07:31.500 "ddgst": ${ddgst:-false} 00:07:31.500 }, 00:07:31.500 "method": "bdev_nvme_attach_controller" 00:07:31.500 } 00:07:31.500 EOF 00:07:31.500 )") 00:07:31.500 22:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:31.500 22:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:31.500 22:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:31.500 22:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:31.500 "params": { 00:07:31.500 "name": "Nvme0", 00:07:31.500 "trtype": "tcp", 00:07:31.500 "traddr": "10.0.0.2", 00:07:31.500 "adrfam": "ipv4", 00:07:31.500 "trsvcid": "4420", 00:07:31.500 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:31.500 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:31.500 "hdgst": false, 00:07:31.500 "ddgst": false 00:07:31.500 }, 00:07:31.500 "method": "bdev_nvme_attach_controller" 00:07:31.500 }' 00:07:31.500 [2024-07-24 22:55:49.209508] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:07:31.500 [2024-07-24 22:55:49.209562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid665570 ] 00:07:31.500 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.500 [2024-07-24 22:55:49.274791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.761 [2024-07-24 22:55:49.339089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.761 Running I/O for 1 seconds... 00:07:33.146 00:07:33.146 Latency(us) 00:07:33.146 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:33.146 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:33.146 Verification LBA range: start 0x0 length 0x400 00:07:33.146 Nvme0n1 : 1.03 1244.63 77.79 0.00 0.00 50613.61 13107.20 40850.77 00:07:33.146 =================================================================================================================== 00:07:33.146 Total : 1244.63 77.79 0.00 0.00 50613.61 13107.20 40850.77 00:07:33.146 22:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:33.146 22:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:33.146 22:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:33.146 22:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:33.146 22:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:33.146 22:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:07:33.146 22:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:07:33.146 22:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:33.146 22:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:07:33.146 22:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:33.146 22:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:33.146 rmmod nvme_tcp 00:07:33.146 rmmod nvme_fabrics 00:07:33.146 rmmod nvme_keyring 00:07:33.146 22:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:33.146 22:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:07:33.146 22:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:07:33.146 22:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 664843 ']' 00:07:33.146 22:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 664843 00:07:33.146 22:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 664843 ']' 00:07:33.146 22:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 664843 00:07:33.146 22:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:33.146 22:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:33.146 22:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 664843 00:07:33.146 22:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 
00:07:33.146 22:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:33.146 22:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 664843' 00:07:33.146 killing process with pid 664843 00:07:33.146 22:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 664843 00:07:33.146 22:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 664843 00:07:33.146 [2024-07-24 22:55:50.895238] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:33.146 22:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:33.146 22:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:33.146 22:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:33.146 22:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:33.146 22:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:33.146 22:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.146 22:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:33.146 22:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.695 22:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:35.695 22:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:35.695 00:07:35.695 real 0m15.059s 00:07:35.695 user 
0m22.834s 00:07:35.695 sys 0m6.940s 00:07:35.695 22:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.695 22:55:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.695 ************************************ 00:07:35.695 END TEST nvmf_host_management 00:07:35.695 ************************************ 00:07:35.695 22:55:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:35.695 22:55:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:35.695 22:55:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.695 22:55:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:35.695 ************************************ 00:07:35.695 START TEST nvmf_lvol 00:07:35.695 ************************************ 00:07:35.695 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:35.695 * Looking for test storage... 
00:07:35.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:35.695 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:35.695 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:35.695 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.695 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.695 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.695 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.695 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.695 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.695 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.695 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.695 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.695 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.695 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:35.695 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:35.695 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.695 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.695 
22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:35.695 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.695 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:35.696 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.696 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.696 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.696 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.696 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.696 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.696 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:35.696 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.696 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:07:35.696 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:35.696 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:35.696 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.696 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.696 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.696 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:35.696 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:35.696 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:35.696 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:35.696 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:35.696 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:35.696 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:35.696 
22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:35.696 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:35.696 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:35.696 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:35.696 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:35.696 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:35.696 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:35.696 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.696 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:35.696 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.696 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:35.696 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:35.696 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:07:35.696 22:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:43.845 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:43.845 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:07:43.845 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:43.845 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:07:43.845 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:43.845 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:43.845 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:43.845 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:07:43.845 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:43.845 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:07:43.845 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:07:43.845 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:07:43.845 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:07:43.845 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:07:43.845 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:07:43.845 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:43.845 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:43.845 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:43.845 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:43.846 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:43.846 
22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:43.846 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 
-- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:43.846 Found net devices under 0000:31:00.0: cvl_0_0 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:43.846 Found net devices under 0000:31:00.1: cvl_0_1 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:43.846 22:56:01 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:43.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:43.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.783 ms 00:07:43.846 00:07:43.846 --- 10.0.0.2 ping statistics --- 00:07:43.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.846 rtt min/avg/max/mdev = 0.783/0.783/0.783/0.000 ms 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:43.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:43.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.454 ms 00:07:43.846 00:07:43.846 --- 10.0.0.1 ping statistics --- 00:07:43.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.846 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:43.846 22:56:01 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=670597 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 670597 00:07:43.846 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:43.847 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 670597 ']' 00:07:43.847 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.847 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:43.847 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.847 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:43.847 22:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:43.847 [2024-07-24 22:56:01.567601] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:07:43.847 [2024-07-24 22:56:01.567661] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.847 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.107 [2024-07-24 22:56:01.644646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:44.108 [2024-07-24 22:56:01.719189] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:44.108 [2024-07-24 22:56:01.719223] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:44.108 [2024-07-24 22:56:01.719231] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:44.108 [2024-07-24 22:56:01.719237] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:44.108 [2024-07-24 22:56:01.719243] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:44.108 [2024-07-24 22:56:01.719399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.108 [2024-07-24 22:56:01.719513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.108 [2024-07-24 22:56:01.719516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.679 22:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:44.679 22:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:44.679 22:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:44.679 22:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:44.679 22:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:44.679 22:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:44.679 22:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:44.939 [2024-07-24 22:56:02.531907] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.939 22:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:45.199 22:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:45.199 22:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:45.199 22:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:45.199 22:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:45.459 22:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:45.718 22:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=90a492b9-ba8a-4248-866d-6b11b084eff4 00:07:45.718 22:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 90a492b9-ba8a-4248-866d-6b11b084eff4 lvol 20 00:07:45.718 22:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=20425f4c-9f55-4c0c-b3c3-d4dc49c327c2 00:07:45.718 22:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:45.978 22:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 20425f4c-9f55-4c0c-b3c3-d4dc49c327c2 00:07:45.978 22:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:46.238 [2024-07-24 22:56:03.885144] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:46.238 22:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:46.497 22:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=671279 00:07:46.497 22:56:04 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:46.497 22:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:46.497 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.438 22:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 20425f4c-9f55-4c0c-b3c3-d4dc49c327c2 MY_SNAPSHOT 00:07:47.699 22:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=4d649412-ff94-4773-8c34-c53d864e9219 00:07:47.699 22:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 20425f4c-9f55-4c0c-b3c3-d4dc49c327c2 30 00:07:47.699 22:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 4d649412-ff94-4773-8c34-c53d864e9219 MY_CLONE 00:07:47.960 22:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=cd46a946-820e-48fd-8c0f-44faaa287254 00:07:47.960 22:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate cd46a946-820e-48fd-8c0f-44faaa287254 00:07:48.530 22:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 671279 00:07:56.667 Initializing NVMe Controllers 00:07:56.667 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:56.667 Controller IO queue size 128, less than required. 00:07:56.667 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:56.667 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:56.667 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:56.667 Initialization complete. Launching workers. 00:07:56.667 ======================================================== 00:07:56.667 Latency(us) 00:07:56.667 Device Information : IOPS MiB/s Average min max 00:07:56.667 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12508.10 48.86 10239.24 1501.41 53400.32 00:07:56.667 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 18080.00 70.62 7081.88 527.38 43925.81 00:07:56.667 ======================================================== 00:07:56.667 Total : 30588.10 119.48 8372.99 527.38 53400.32 00:07:56.667 00:07:56.667 22:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:56.928 22:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 20425f4c-9f55-4c0c-b3c3-d4dc49c327c2 00:07:56.928 22:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 90a492b9-ba8a-4248-866d-6b11b084eff4 00:07:57.189 22:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:57.189 22:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:57.189 22:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:57.189 22:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:57.189 22:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:07:57.189 22:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:57.189 22:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:07:57.189 22:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:57.189 22:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:57.189 rmmod nvme_tcp 00:07:57.189 rmmod nvme_fabrics 00:07:57.189 rmmod nvme_keyring 00:07:57.189 22:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:57.189 22:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:07:57.189 22:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:07:57.189 22:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 670597 ']' 00:07:57.189 22:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 670597 00:07:57.189 22:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 670597 ']' 00:07:57.189 22:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 670597 00:07:57.189 22:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:07:57.189 22:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:57.189 22:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 670597 00:07:57.189 22:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:57.190 22:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:57.190 22:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 670597' 00:07:57.190 killing process with pid 670597 00:07:57.190 22:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@969 -- # kill 670597 00:07:57.190 22:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 670597 00:07:57.450 22:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:57.450 22:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:57.450 22:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:57.450 22:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:57.450 22:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:57.450 22:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.450 22:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.450 22:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.427 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:59.427 00:07:59.427 real 0m24.124s 00:07:59.427 user 1m3.680s 00:07:59.427 sys 0m8.409s 00:07:59.427 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:59.427 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:59.427 ************************************ 00:07:59.427 END TEST nvmf_lvol 00:07:59.427 ************************************ 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:59.689 ************************************ 00:07:59.689 START TEST nvmf_lvs_grow 00:07:59.689 ************************************ 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:59.689 * Looking for test storage... 00:07:59.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- paths/export.sh@5 -- # export PATH 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:59.689 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:59.690 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:07:59.690 22:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:07.861 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:07.861 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:08:07.861 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:07.861 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:07.861 22:56:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:07.861 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:07.861 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:07.861 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:08:07.861 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:07.861 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:08:07.861 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:08:07.861 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:08:07.861 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:08:07.861 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:08:07.861 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:08:07.861 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:07.861 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:07.861 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:07.861 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:07.861 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:07.861 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:07.861 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:07.861 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:07.861 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:07.861 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:07.861 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:07.861 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:07.862 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.862 
22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:07.862 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:07.862 22:56:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:07.862 Found net devices under 0000:31:00.0: cvl_0_0 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:07.862 Found net devices under 0000:31:00.1: cvl_0_1 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:07.862 22:56:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:07.862 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:08.123 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:08.123 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:08.123 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:08.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:08.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:08:08.123 00:08:08.123 --- 10.0.0.2 ping statistics --- 00:08:08.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.123 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:08:08.123 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:08.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:08.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:08:08.123 00:08:08.123 --- 10.0.0.1 ping statistics --- 00:08:08.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.123 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:08:08.123 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:08.123 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:08:08.123 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:08.123 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:08.123 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:08.123 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:08.123 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:08.123 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:08.123 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:08.123 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:08.123 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:08.123 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:08.123 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:08.123 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=678026 00:08:08.123 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 678026 00:08:08.123 22:56:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:08.123 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 678026 ']' 00:08:08.124 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.124 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:08.124 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.124 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:08.124 22:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:08.124 [2024-07-24 22:56:25.838689] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:08:08.124 [2024-07-24 22:56:25.838791] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.124 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.384 [2024-07-24 22:56:25.918253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.384 [2024-07-24 22:56:25.992376] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:08.384 [2024-07-24 22:56:25.992417] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:08.384 [2024-07-24 22:56:25.992425] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:08.384 [2024-07-24 22:56:25.992431] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:08.384 [2024-07-24 22:56:25.992437] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:08.384 [2024-07-24 22:56:25.992455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.955 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:08.955 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:08.955 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:08.955 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:08.955 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:08.955 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:08.955 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:09.216 [2024-07-24 22:56:26.784096] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.216 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:09.216 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:09.216 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.216 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
common/autotest_common.sh@10 -- # set +x 00:08:09.216 ************************************ 00:08:09.216 START TEST lvs_grow_clean 00:08:09.216 ************************************ 00:08:09.216 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:09.216 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:09.216 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:09.216 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:09.216 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:09.216 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:09.216 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:09.216 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:09.216 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:09.216 22:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:09.476 22:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:09.476 22:56:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:09.476 22:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=09bbae1a-3e34-427d-8d7a-3d5b10c403b7 00:08:09.476 22:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09bbae1a-3e34-427d-8d7a-3d5b10c403b7 00:08:09.476 22:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:09.737 22:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:09.737 22:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:09.737 22:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 09bbae1a-3e34-427d-8d7a-3d5b10c403b7 lvol 150 00:08:09.998 22:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=5324a9a2-59ef-42c7-bbc9-8902d39d5f83 00:08:09.998 22:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:09.998 22:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:09.998 [2024-07-24 22:56:27.672389] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:09.998 [2024-07-24 22:56:27.672440] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:09.998 true 00:08:09.998 22:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:09.998 22:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09bbae1a-3e34-427d-8d7a-3d5b10c403b7 00:08:10.259 22:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:10.259 22:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:10.259 22:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5324a9a2-59ef-42c7-bbc9-8902d39d5f83 00:08:10.519 22:56:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:10.519 [2024-07-24 22:56:28.274288] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:10.519 22:56:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:10.780 22:56:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=678715 00:08:10.780 22:56:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:10.780 22:56:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:10.780 22:56:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 678715 /var/tmp/bdevperf.sock 00:08:10.780 22:56:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 678715 ']' 00:08:10.780 22:56:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:10.780 22:56:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:10.780 22:56:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:10.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:10.780 22:56:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:10.780 22:56:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:10.780 [2024-07-24 22:56:28.492626] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:08:10.780 [2024-07-24 22:56:28.492675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid678715 ] 00:08:10.780 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.040 [2024-07-24 22:56:28.576053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.040 [2024-07-24 22:56:28.640344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.611 22:56:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:11.611 22:56:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:11.611 22:56:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:11.872 Nvme0n1 00:08:12.133 22:56:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:12.133 [ 00:08:12.133 { 00:08:12.133 "name": "Nvme0n1", 00:08:12.133 "aliases": [ 00:08:12.133 "5324a9a2-59ef-42c7-bbc9-8902d39d5f83" 00:08:12.133 ], 00:08:12.133 "product_name": "NVMe disk", 00:08:12.133 "block_size": 4096, 00:08:12.133 "num_blocks": 38912, 00:08:12.133 "uuid": "5324a9a2-59ef-42c7-bbc9-8902d39d5f83", 00:08:12.133 "assigned_rate_limits": { 00:08:12.133 "rw_ios_per_sec": 0, 00:08:12.133 "rw_mbytes_per_sec": 0, 00:08:12.133 "r_mbytes_per_sec": 0, 00:08:12.133 "w_mbytes_per_sec": 0 00:08:12.133 }, 00:08:12.133 "claimed": false, 00:08:12.133 "zoned": false, 00:08:12.133 
"supported_io_types": { 00:08:12.133 "read": true, 00:08:12.133 "write": true, 00:08:12.133 "unmap": true, 00:08:12.133 "flush": true, 00:08:12.133 "reset": true, 00:08:12.133 "nvme_admin": true, 00:08:12.133 "nvme_io": true, 00:08:12.133 "nvme_io_md": false, 00:08:12.133 "write_zeroes": true, 00:08:12.133 "zcopy": false, 00:08:12.133 "get_zone_info": false, 00:08:12.133 "zone_management": false, 00:08:12.133 "zone_append": false, 00:08:12.133 "compare": true, 00:08:12.133 "compare_and_write": true, 00:08:12.133 "abort": true, 00:08:12.133 "seek_hole": false, 00:08:12.133 "seek_data": false, 00:08:12.133 "copy": true, 00:08:12.133 "nvme_iov_md": false 00:08:12.133 }, 00:08:12.133 "memory_domains": [ 00:08:12.133 { 00:08:12.133 "dma_device_id": "system", 00:08:12.133 "dma_device_type": 1 00:08:12.133 } 00:08:12.133 ], 00:08:12.133 "driver_specific": { 00:08:12.133 "nvme": [ 00:08:12.133 { 00:08:12.133 "trid": { 00:08:12.133 "trtype": "TCP", 00:08:12.133 "adrfam": "IPv4", 00:08:12.133 "traddr": "10.0.0.2", 00:08:12.133 "trsvcid": "4420", 00:08:12.133 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:12.133 }, 00:08:12.133 "ctrlr_data": { 00:08:12.133 "cntlid": 1, 00:08:12.133 "vendor_id": "0x8086", 00:08:12.133 "model_number": "SPDK bdev Controller", 00:08:12.133 "serial_number": "SPDK0", 00:08:12.133 "firmware_revision": "24.09", 00:08:12.133 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:12.133 "oacs": { 00:08:12.133 "security": 0, 00:08:12.133 "format": 0, 00:08:12.133 "firmware": 0, 00:08:12.133 "ns_manage": 0 00:08:12.133 }, 00:08:12.133 "multi_ctrlr": true, 00:08:12.133 "ana_reporting": false 00:08:12.133 }, 00:08:12.133 "vs": { 00:08:12.133 "nvme_version": "1.3" 00:08:12.133 }, 00:08:12.133 "ns_data": { 00:08:12.133 "id": 1, 00:08:12.133 "can_share": true 00:08:12.133 } 00:08:12.133 } 00:08:12.133 ], 00:08:12.133 "mp_policy": "active_passive" 00:08:12.133 } 00:08:12.133 } 00:08:12.133 ] 00:08:12.133 22:56:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@56 -- # run_test_pid=679057 00:08:12.133 22:56:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:12.133 22:56:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:12.133 Running I/O for 10 seconds... 00:08:13.519 Latency(us) 00:08:13.519 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:13.519 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.519 Nvme0n1 : 1.00 18074.00 70.60 0.00 0.00 0.00 0.00 0.00 00:08:13.519 =================================================================================================================== 00:08:13.519 Total : 18074.00 70.60 0.00 0.00 0.00 0.00 0.00 00:08:13.519 00:08:14.090 22:56:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 09bbae1a-3e34-427d-8d7a-3d5b10c403b7 00:08:14.351 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.351 Nvme0n1 : 2.00 18220.50 71.17 0.00 0.00 0.00 0.00 0.00 00:08:14.351 =================================================================================================================== 00:08:14.351 Total : 18220.50 71.17 0.00 0.00 0.00 0.00 0.00 00:08:14.351 00:08:14.351 true 00:08:14.351 22:56:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09bbae1a-3e34-427d-8d7a-3d5b10c403b7 00:08:14.351 22:56:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:14.612 22:56:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:14.612 22:56:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:14.612 22:56:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 679057 00:08:15.182 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.182 Nvme0n1 : 3.00 18270.00 71.37 0.00 0.00 0.00 0.00 0.00 00:08:15.182 =================================================================================================================== 00:08:15.182 Total : 18270.00 71.37 0.00 0.00 0.00 0.00 0.00 00:08:15.182 00:08:16.570 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.570 Nvme0n1 : 4.00 18310.50 71.53 0.00 0.00 0.00 0.00 0.00 00:08:16.570 =================================================================================================================== 00:08:16.570 Total : 18310.50 71.53 0.00 0.00 0.00 0.00 0.00 00:08:16.570 00:08:17.141 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.141 Nvme0n1 : 5.00 18334.60 71.62 0.00 0.00 0.00 0.00 0.00 00:08:17.141 =================================================================================================================== 00:08:17.141 Total : 18334.60 71.62 0.00 0.00 0.00 0.00 0.00 00:08:17.141 00:08:18.524 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.524 Nvme0n1 : 6.00 18351.00 71.68 0.00 0.00 0.00 0.00 0.00 00:08:18.524 =================================================================================================================== 00:08:18.524 Total : 18351.00 71.68 0.00 0.00 0.00 0.00 0.00 00:08:18.524 00:08:19.466 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.466 Nvme0n1 : 7.00 18362.57 71.73 0.00 0.00 0.00 0.00 0.00 00:08:19.466 
=================================================================================================================== 00:08:19.466 Total : 18362.57 71.73 0.00 0.00 0.00 0.00 0.00 00:08:19.466 00:08:20.408 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.408 Nvme0n1 : 8.00 18379.00 71.79 0.00 0.00 0.00 0.00 0.00 00:08:20.408 =================================================================================================================== 00:08:20.408 Total : 18379.00 71.79 0.00 0.00 0.00 0.00 0.00 00:08:20.408 00:08:21.346 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.346 Nvme0n1 : 9.00 18392.11 71.84 0.00 0.00 0.00 0.00 0.00 00:08:21.346 =================================================================================================================== 00:08:21.346 Total : 18392.11 71.84 0.00 0.00 0.00 0.00 0.00 00:08:21.346 00:08:22.287 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.287 Nvme0n1 : 10.00 18396.10 71.86 0.00 0.00 0.00 0.00 0.00 00:08:22.287 =================================================================================================================== 00:08:22.287 Total : 18396.10 71.86 0.00 0.00 0.00 0.00 0.00 00:08:22.287 00:08:22.287 00:08:22.287 Latency(us) 00:08:22.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:22.288 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.288 Nvme0n1 : 10.01 18399.42 71.87 0.00 0.00 6953.47 4423.68 12943.36 00:08:22.288 =================================================================================================================== 00:08:22.288 Total : 18399.42 71.87 0.00 0.00 6953.47 4423.68 12943.36 00:08:22.288 0 00:08:22.288 22:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 678715 00:08:22.288 22:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@950 -- # '[' -z 678715 ']' 00:08:22.288 22:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 678715 00:08:22.288 22:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:22.288 22:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:22.288 22:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 678715 00:08:22.288 22:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:22.288 22:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:22.288 22:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 678715' 00:08:22.288 killing process with pid 678715 00:08:22.288 22:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 678715 00:08:22.288 Received shutdown signal, test time was about 10.000000 seconds 00:08:22.288 00:08:22.288 Latency(us) 00:08:22.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:22.288 =================================================================================================================== 00:08:22.288 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:22.288 22:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 678715 00:08:22.549 22:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:22.549 22:56:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:22.810 22:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09bbae1a-3e34-427d-8d7a-3d5b10c403b7 00:08:22.810 22:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:23.070 22:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:23.070 22:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:23.070 22:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:23.070 [2024-07-24 22:56:40.753916] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:23.070 22:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09bbae1a-3e34-427d-8d7a-3d5b10c403b7 00:08:23.070 22:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:23.070 22:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09bbae1a-3e34-427d-8d7a-3d5b10c403b7 00:08:23.070 22:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:23.070 22:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.070 22:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:23.070 22:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.070 22:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:23.070 22:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.070 22:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:23.070 22:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:23.070 22:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09bbae1a-3e34-427d-8d7a-3d5b10c403b7 00:08:23.331 request: 00:08:23.331 { 00:08:23.331 "uuid": "09bbae1a-3e34-427d-8d7a-3d5b10c403b7", 00:08:23.331 "method": "bdev_lvol_get_lvstores", 00:08:23.331 "req_id": 1 00:08:23.331 } 00:08:23.331 Got JSON-RPC error response 00:08:23.331 response: 00:08:23.331 { 00:08:23.331 "code": -19, 00:08:23.331 "message": "No such device" 00:08:23.331 } 00:08:23.331 22:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:23.331 22:56:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:23.331 22:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:23.331 22:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:23.331 22:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:23.331 aio_bdev 00:08:23.331 22:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5324a9a2-59ef-42c7-bbc9-8902d39d5f83 00:08:23.331 22:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=5324a9a2-59ef-42c7-bbc9-8902d39d5f83 00:08:23.331 22:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:23.331 22:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:23.331 22:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:23.331 22:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:23.331 22:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:23.592 22:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5324a9a2-59ef-42c7-bbc9-8902d39d5f83 -t 2000 00:08:23.853 [ 00:08:23.853 { 
00:08:23.853 "name": "5324a9a2-59ef-42c7-bbc9-8902d39d5f83", 00:08:23.853 "aliases": [ 00:08:23.853 "lvs/lvol" 00:08:23.853 ], 00:08:23.853 "product_name": "Logical Volume", 00:08:23.853 "block_size": 4096, 00:08:23.853 "num_blocks": 38912, 00:08:23.853 "uuid": "5324a9a2-59ef-42c7-bbc9-8902d39d5f83", 00:08:23.853 "assigned_rate_limits": { 00:08:23.853 "rw_ios_per_sec": 0, 00:08:23.853 "rw_mbytes_per_sec": 0, 00:08:23.853 "r_mbytes_per_sec": 0, 00:08:23.853 "w_mbytes_per_sec": 0 00:08:23.853 }, 00:08:23.853 "claimed": false, 00:08:23.853 "zoned": false, 00:08:23.853 "supported_io_types": { 00:08:23.853 "read": true, 00:08:23.853 "write": true, 00:08:23.853 "unmap": true, 00:08:23.853 "flush": false, 00:08:23.853 "reset": true, 00:08:23.853 "nvme_admin": false, 00:08:23.853 "nvme_io": false, 00:08:23.853 "nvme_io_md": false, 00:08:23.853 "write_zeroes": true, 00:08:23.853 "zcopy": false, 00:08:23.853 "get_zone_info": false, 00:08:23.853 "zone_management": false, 00:08:23.853 "zone_append": false, 00:08:23.853 "compare": false, 00:08:23.853 "compare_and_write": false, 00:08:23.853 "abort": false, 00:08:23.853 "seek_hole": true, 00:08:23.853 "seek_data": true, 00:08:23.853 "copy": false, 00:08:23.853 "nvme_iov_md": false 00:08:23.853 }, 00:08:23.853 "driver_specific": { 00:08:23.853 "lvol": { 00:08:23.853 "lvol_store_uuid": "09bbae1a-3e34-427d-8d7a-3d5b10c403b7", 00:08:23.853 "base_bdev": "aio_bdev", 00:08:23.853 "thin_provision": false, 00:08:23.853 "num_allocated_clusters": 38, 00:08:23.853 "snapshot": false, 00:08:23.853 "clone": false, 00:08:23.853 "esnap_clone": false 00:08:23.853 } 00:08:23.853 } 00:08:23.853 } 00:08:23.853 ] 00:08:23.853 22:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:23.853 22:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:23.853 22:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09bbae1a-3e34-427d-8d7a-3d5b10c403b7 00:08:23.853 22:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:23.853 22:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09bbae1a-3e34-427d-8d7a-3d5b10c403b7 00:08:23.853 22:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:24.114 22:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:24.114 22:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5324a9a2-59ef-42c7-bbc9-8902d39d5f83 00:08:24.114 22:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 09bbae1a-3e34-427d-8d7a-3d5b10c403b7 00:08:24.374 22:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:24.634 22:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:24.634 00:08:24.634 real 0m15.389s 00:08:24.634 user 0m15.010s 00:08:24.634 sys 0m1.365s 00:08:24.634 22:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.634 22:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@10 -- # set +x 00:08:24.634 ************************************ 00:08:24.634 END TEST lvs_grow_clean 00:08:24.634 ************************************ 00:08:24.634 22:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:24.634 22:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:24.634 22:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.634 22:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:24.634 ************************************ 00:08:24.634 START TEST lvs_grow_dirty 00:08:24.634 ************************************ 00:08:24.634 22:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:24.634 22:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:24.634 22:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:24.634 22:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:24.634 22:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:24.634 22:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:24.634 22:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:24.634 22:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:24.634 22:56:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:24.634 22:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:24.894 22:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:24.894 22:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:25.154 22:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3794efdc-0b58-443a-aa37-008b39172ad6 00:08:25.154 22:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3794efdc-0b58-443a-aa37-008b39172ad6 00:08:25.154 22:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:25.154 22:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:25.154 22:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:25.154 22:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3794efdc-0b58-443a-aa37-008b39172ad6 lvol 150 00:08:25.440 22:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@33 -- # lvol=2f199144-c1f2-4fea-ad56-5f4f8a922a8c 00:08:25.440 22:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:25.440 22:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:25.440 [2024-07-24 22:56:43.164349] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:25.440 [2024-07-24 22:56:43.164399] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:25.440 true 00:08:25.440 22:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3794efdc-0b58-443a-aa37-008b39172ad6 00:08:25.440 22:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:25.770 22:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:25.770 22:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:25.770 22:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2f199144-c1f2-4fea-ad56-5f4f8a922a8c 00:08:26.035 22:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:26.035 [2024-07-24 22:56:43.782219] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.035 22:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:26.296 22:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=681813 00:08:26.296 22:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:26.296 22:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:26.296 22:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 681813 /var/tmp/bdevperf.sock 00:08:26.296 22:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 681813 ']' 00:08:26.296 22:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:26.296 22:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:26.296 22:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:08:26.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:26.296 22:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:26.296 22:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:26.296 [2024-07-24 22:56:44.000978] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:08:26.296 [2024-07-24 22:56:44.001027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid681813 ] 00:08:26.296 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.296 [2024-07-24 22:56:44.082048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.557 [2024-07-24 22:56:44.135818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.129 22:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:27.129 22:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:27.129 22:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:27.390 Nvme0n1 00:08:27.390 22:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:27.390 [ 00:08:27.390 { 00:08:27.390 "name": "Nvme0n1", 00:08:27.390 "aliases": [ 00:08:27.390 "2f199144-c1f2-4fea-ad56-5f4f8a922a8c" 
00:08:27.390 ], 00:08:27.390 "product_name": "NVMe disk", 00:08:27.390 "block_size": 4096, 00:08:27.390 "num_blocks": 38912, 00:08:27.390 "uuid": "2f199144-c1f2-4fea-ad56-5f4f8a922a8c", 00:08:27.390 "assigned_rate_limits": { 00:08:27.390 "rw_ios_per_sec": 0, 00:08:27.390 "rw_mbytes_per_sec": 0, 00:08:27.390 "r_mbytes_per_sec": 0, 00:08:27.390 "w_mbytes_per_sec": 0 00:08:27.390 }, 00:08:27.390 "claimed": false, 00:08:27.390 "zoned": false, 00:08:27.390 "supported_io_types": { 00:08:27.390 "read": true, 00:08:27.390 "write": true, 00:08:27.390 "unmap": true, 00:08:27.390 "flush": true, 00:08:27.390 "reset": true, 00:08:27.390 "nvme_admin": true, 00:08:27.390 "nvme_io": true, 00:08:27.390 "nvme_io_md": false, 00:08:27.390 "write_zeroes": true, 00:08:27.390 "zcopy": false, 00:08:27.390 "get_zone_info": false, 00:08:27.390 "zone_management": false, 00:08:27.390 "zone_append": false, 00:08:27.390 "compare": true, 00:08:27.390 "compare_and_write": true, 00:08:27.390 "abort": true, 00:08:27.390 "seek_hole": false, 00:08:27.390 "seek_data": false, 00:08:27.390 "copy": true, 00:08:27.390 "nvme_iov_md": false 00:08:27.390 }, 00:08:27.390 "memory_domains": [ 00:08:27.390 { 00:08:27.390 "dma_device_id": "system", 00:08:27.390 "dma_device_type": 1 00:08:27.390 } 00:08:27.390 ], 00:08:27.390 "driver_specific": { 00:08:27.390 "nvme": [ 00:08:27.390 { 00:08:27.390 "trid": { 00:08:27.390 "trtype": "TCP", 00:08:27.390 "adrfam": "IPv4", 00:08:27.390 "traddr": "10.0.0.2", 00:08:27.390 "trsvcid": "4420", 00:08:27.390 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:27.390 }, 00:08:27.390 "ctrlr_data": { 00:08:27.390 "cntlid": 1, 00:08:27.390 "vendor_id": "0x8086", 00:08:27.390 "model_number": "SPDK bdev Controller", 00:08:27.390 "serial_number": "SPDK0", 00:08:27.390 "firmware_revision": "24.09", 00:08:27.390 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:27.390 "oacs": { 00:08:27.390 "security": 0, 00:08:27.390 "format": 0, 00:08:27.390 "firmware": 0, 00:08:27.390 "ns_manage": 0 
00:08:27.390 }, 00:08:27.390 "multi_ctrlr": true, 00:08:27.390 "ana_reporting": false 00:08:27.390 }, 00:08:27.390 "vs": { 00:08:27.390 "nvme_version": "1.3" 00:08:27.390 }, 00:08:27.390 "ns_data": { 00:08:27.390 "id": 1, 00:08:27.390 "can_share": true 00:08:27.390 } 00:08:27.390 } 00:08:27.390 ], 00:08:27.390 "mp_policy": "active_passive" 00:08:27.390 } 00:08:27.390 } 00:08:27.390 ] 00:08:27.390 22:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=682150 00:08:27.390 22:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:27.390 22:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:27.651 Running I/O for 10 seconds... 00:08:28.591 Latency(us) 00:08:28.591 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.591 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.591 Nvme0n1 : 1.00 18250.00 71.29 0.00 0.00 0.00 0.00 0.00 00:08:28.591 =================================================================================================================== 00:08:28.591 Total : 18250.00 71.29 0.00 0.00 0.00 0.00 0.00 00:08:28.591 00:08:29.533 22:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3794efdc-0b58-443a-aa37-008b39172ad6 00:08:29.533 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.533 Nvme0n1 : 2.00 18305.50 71.51 0.00 0.00 0.00 0.00 0.00 00:08:29.533 =================================================================================================================== 00:08:29.533 Total : 18305.50 71.51 0.00 0.00 0.00 0.00 0.00 00:08:29.533 
00:08:29.533 true 00:08:29.533 22:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3794efdc-0b58-443a-aa37-008b39172ad6 00:08:29.533 22:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:29.794 22:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:29.794 22:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:29.794 22:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 682150 00:08:30.736 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.736 Nvme0n1 : 3.00 18347.67 71.67 0.00 0.00 0.00 0.00 0.00 00:08:30.736 =================================================================================================================== 00:08:30.736 Total : 18347.67 71.67 0.00 0.00 0.00 0.00 0.00 00:08:30.736 00:08:31.678 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.678 Nvme0n1 : 4.00 18370.25 71.76 0.00 0.00 0.00 0.00 0.00 00:08:31.678 =================================================================================================================== 00:08:31.678 Total : 18370.25 71.76 0.00 0.00 0.00 0.00 0.00 00:08:31.678 00:08:32.620 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.620 Nvme0n1 : 5.00 18343.80 71.66 0.00 0.00 0.00 0.00 0.00 00:08:32.620 =================================================================================================================== 00:08:32.621 Total : 18343.80 71.66 0.00 0.00 0.00 0.00 0.00 00:08:32.621 00:08:33.562 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.562 Nvme0n1 : 6.00 
18348.00 71.67 0.00 0.00 0.00 0.00 0.00 00:08:33.562 =================================================================================================================== 00:08:33.562 Total : 18348.00 71.67 0.00 0.00 0.00 0.00 0.00 00:08:33.562 00:08:34.504 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.504 Nvme0n1 : 7.00 18360.00 71.72 0.00 0.00 0.00 0.00 0.00 00:08:34.504 =================================================================================================================== 00:08:34.504 Total : 18360.00 71.72 0.00 0.00 0.00 0.00 0.00 00:08:34.504 00:08:35.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.446 Nvme0n1 : 8.00 18376.88 71.78 0.00 0.00 0.00 0.00 0.00 00:08:35.446 =================================================================================================================== 00:08:35.446 Total : 18376.88 71.78 0.00 0.00 0.00 0.00 0.00 00:08:35.446 00:08:36.852 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.852 Nvme0n1 : 9.00 18383.00 71.81 0.00 0.00 0.00 0.00 0.00 00:08:36.852 =================================================================================================================== 00:08:36.852 Total : 18383.00 71.81 0.00 0.00 0.00 0.00 0.00 00:08:36.852 00:08:37.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.795 Nvme0n1 : 10.00 18394.40 71.85 0.00 0.00 0.00 0.00 0.00 00:08:37.795 =================================================================================================================== 00:08:37.795 Total : 18394.40 71.85 0.00 0.00 0.00 0.00 0.00 00:08:37.795 00:08:37.795 00:08:37.795 Latency(us) 00:08:37.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.795 Nvme0n1 : 10.01 18394.29 71.85 0.00 0.00 6955.12 1611.09 12943.36 00:08:37.795 
=================================================================================================================== 00:08:37.795 Total : 18394.29 71.85 0.00 0.00 6955.12 1611.09 12943.36 00:08:37.795 0 00:08:37.795 22:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 681813 00:08:37.795 22:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 681813 ']' 00:08:37.795 22:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 681813 00:08:37.795 22:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:37.795 22:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:37.795 22:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 681813 00:08:37.795 22:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:37.795 22:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:37.795 22:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 681813' 00:08:37.795 killing process with pid 681813 00:08:37.795 22:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 681813 00:08:37.795 Received shutdown signal, test time was about 10.000000 seconds 00:08:37.795 00:08:37.795 Latency(us) 00:08:37.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.795 =================================================================================================================== 00:08:37.795 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:08:37.795 22:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 681813 00:08:37.795 22:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:38.057 22:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:38.057 22:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:38.057 22:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3794efdc-0b58-443a-aa37-008b39172ad6 00:08:38.318 22:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:38.318 22:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:38.318 22:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 678026 00:08:38.318 22:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 678026 00:08:38.318 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 678026 Killed "${NVMF_APP[@]}" "$@" 00:08:38.318 22:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:38.318 22:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:38.318 22:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 00:08:38.318 22:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:38.318 22:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:38.318 22:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=684176 00:08:38.318 22:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 684176 00:08:38.318 22:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:38.318 22:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 684176 ']' 00:08:38.318 22:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.318 22:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:38.318 22:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.318 22:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:38.318 22:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:38.318 [2024-07-24 22:56:56.033613] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:08:38.318 [2024-07-24 22:56:56.033669] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.318 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.579 [2024-07-24 22:56:56.106763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.579 [2024-07-24 22:56:56.171032] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:38.579 [2024-07-24 22:56:56.171065] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:38.579 [2024-07-24 22:56:56.171073] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:38.579 [2024-07-24 22:56:56.171079] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:38.579 [2024-07-24 22:56:56.171085] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:38.579 [2024-07-24 22:56:56.171108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.151 22:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:39.151 22:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:39.151 22:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:39.151 22:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:39.151 22:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:39.151 22:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.151 22:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:39.412 [2024-07-24 22:56:56.976088] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:39.412 [2024-07-24 22:56:56.976178] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:39.412 [2024-07-24 22:56:56.976208] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:39.412 22:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:39.412 22:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 2f199144-c1f2-4fea-ad56-5f4f8a922a8c 00:08:39.412 22:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=2f199144-c1f2-4fea-ad56-5f4f8a922a8c 
00:08:39.412 22:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:39.412 22:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:39.412 22:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:39.412 22:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:39.412 22:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:39.412 22:56:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2f199144-c1f2-4fea-ad56-5f4f8a922a8c -t 2000 00:08:39.673 [ 00:08:39.673 { 00:08:39.673 "name": "2f199144-c1f2-4fea-ad56-5f4f8a922a8c", 00:08:39.673 "aliases": [ 00:08:39.673 "lvs/lvol" 00:08:39.673 ], 00:08:39.673 "product_name": "Logical Volume", 00:08:39.673 "block_size": 4096, 00:08:39.673 "num_blocks": 38912, 00:08:39.673 "uuid": "2f199144-c1f2-4fea-ad56-5f4f8a922a8c", 00:08:39.673 "assigned_rate_limits": { 00:08:39.673 "rw_ios_per_sec": 0, 00:08:39.673 "rw_mbytes_per_sec": 0, 00:08:39.673 "r_mbytes_per_sec": 0, 00:08:39.673 "w_mbytes_per_sec": 0 00:08:39.673 }, 00:08:39.673 "claimed": false, 00:08:39.673 "zoned": false, 00:08:39.673 "supported_io_types": { 00:08:39.673 "read": true, 00:08:39.673 "write": true, 00:08:39.673 "unmap": true, 00:08:39.673 "flush": false, 00:08:39.673 "reset": true, 00:08:39.673 "nvme_admin": false, 00:08:39.673 "nvme_io": false, 00:08:39.673 "nvme_io_md": false, 00:08:39.673 "write_zeroes": true, 00:08:39.673 "zcopy": false, 00:08:39.673 "get_zone_info": false, 00:08:39.673 "zone_management": false, 00:08:39.673 "zone_append": 
false, 00:08:39.673 "compare": false, 00:08:39.673 "compare_and_write": false, 00:08:39.673 "abort": false, 00:08:39.673 "seek_hole": true, 00:08:39.673 "seek_data": true, 00:08:39.673 "copy": false, 00:08:39.673 "nvme_iov_md": false 00:08:39.673 }, 00:08:39.673 "driver_specific": { 00:08:39.673 "lvol": { 00:08:39.673 "lvol_store_uuid": "3794efdc-0b58-443a-aa37-008b39172ad6", 00:08:39.673 "base_bdev": "aio_bdev", 00:08:39.673 "thin_provision": false, 00:08:39.673 "num_allocated_clusters": 38, 00:08:39.673 "snapshot": false, 00:08:39.673 "clone": false, 00:08:39.673 "esnap_clone": false 00:08:39.673 } 00:08:39.673 } 00:08:39.673 } 00:08:39.673 ] 00:08:39.673 22:56:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:39.673 22:56:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3794efdc-0b58-443a-aa37-008b39172ad6 00:08:39.673 22:56:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:39.673 22:56:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:39.673 22:56:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3794efdc-0b58-443a-aa37-008b39172ad6 00:08:39.673 22:56:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:39.933 22:56:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:39.933 22:56:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:08:40.194 [2024-07-24 22:56:57.743982] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:40.194 22:56:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3794efdc-0b58-443a-aa37-008b39172ad6 00:08:40.194 22:56:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:40.194 22:56:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3794efdc-0b58-443a-aa37-008b39172ad6 00:08:40.194 22:56:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:40.194 22:56:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:40.194 22:56:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:40.194 22:56:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:40.194 22:56:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:40.194 22:56:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:40.194 22:56:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:40.194 22:56:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:40.194 22:56:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3794efdc-0b58-443a-aa37-008b39172ad6 00:08:40.194 request: 00:08:40.194 { 00:08:40.194 "uuid": "3794efdc-0b58-443a-aa37-008b39172ad6", 00:08:40.194 "method": "bdev_lvol_get_lvstores", 00:08:40.194 "req_id": 1 00:08:40.194 } 00:08:40.194 Got JSON-RPC error response 00:08:40.194 response: 00:08:40.194 { 00:08:40.194 "code": -19, 00:08:40.194 "message": "No such device" 00:08:40.194 } 00:08:40.194 22:56:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:40.194 22:56:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:40.194 22:56:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:40.194 22:56:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:40.194 22:56:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:40.455 aio_bdev 00:08:40.455 22:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2f199144-c1f2-4fea-ad56-5f4f8a922a8c 00:08:40.455 22:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=2f199144-c1f2-4fea-ad56-5f4f8a922a8c 00:08:40.455 22:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:40.455 22:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:40.455 22:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:40.455 22:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:40.455 22:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:40.716 22:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2f199144-c1f2-4fea-ad56-5f4f8a922a8c -t 2000 00:08:40.716 [ 00:08:40.716 { 00:08:40.716 "name": "2f199144-c1f2-4fea-ad56-5f4f8a922a8c", 00:08:40.716 "aliases": [ 00:08:40.716 "lvs/lvol" 00:08:40.716 ], 00:08:40.716 "product_name": "Logical Volume", 00:08:40.716 "block_size": 4096, 00:08:40.716 "num_blocks": 38912, 00:08:40.716 "uuid": "2f199144-c1f2-4fea-ad56-5f4f8a922a8c", 00:08:40.716 "assigned_rate_limits": { 00:08:40.716 "rw_ios_per_sec": 0, 00:08:40.716 "rw_mbytes_per_sec": 0, 00:08:40.716 "r_mbytes_per_sec": 0, 00:08:40.716 "w_mbytes_per_sec": 0 00:08:40.716 }, 00:08:40.716 "claimed": false, 00:08:40.716 "zoned": false, 00:08:40.716 "supported_io_types": { 00:08:40.716 "read": true, 00:08:40.716 "write": true, 00:08:40.716 "unmap": true, 00:08:40.716 "flush": false, 00:08:40.716 "reset": true, 00:08:40.716 "nvme_admin": false, 00:08:40.716 "nvme_io": false, 00:08:40.716 "nvme_io_md": false, 00:08:40.716 "write_zeroes": true, 00:08:40.716 "zcopy": false, 00:08:40.716 "get_zone_info": false, 00:08:40.716 "zone_management": false, 00:08:40.716 "zone_append": false, 00:08:40.716 "compare": false, 00:08:40.716 "compare_and_write": false, 
00:08:40.716 "abort": false, 00:08:40.716 "seek_hole": true, 00:08:40.716 "seek_data": true, 00:08:40.716 "copy": false, 00:08:40.716 "nvme_iov_md": false 00:08:40.716 }, 00:08:40.716 "driver_specific": { 00:08:40.716 "lvol": { 00:08:40.716 "lvol_store_uuid": "3794efdc-0b58-443a-aa37-008b39172ad6", 00:08:40.716 "base_bdev": "aio_bdev", 00:08:40.716 "thin_provision": false, 00:08:40.716 "num_allocated_clusters": 38, 00:08:40.716 "snapshot": false, 00:08:40.716 "clone": false, 00:08:40.716 "esnap_clone": false 00:08:40.716 } 00:08:40.716 } 00:08:40.716 } 00:08:40.716 ] 00:08:40.716 22:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:40.716 22:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3794efdc-0b58-443a-aa37-008b39172ad6 00:08:40.716 22:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:40.976 22:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:40.976 22:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3794efdc-0b58-443a-aa37-008b39172ad6 00:08:40.977 22:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:40.977 22:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:40.977 22:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2f199144-c1f2-4fea-ad56-5f4f8a922a8c 00:08:41.237 22:56:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3794efdc-0b58-443a-aa37-008b39172ad6 00:08:41.500 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:41.500 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:41.500 00:08:41.500 real 0m16.915s 00:08:41.500 user 0m44.469s 00:08:41.500 sys 0m2.871s 00:08:41.500 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:41.500 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:41.500 ************************************ 00:08:41.500 END TEST lvs_grow_dirty 00:08:41.500 ************************************ 00:08:41.500 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:41.500 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:41.500 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:41.500 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:41.500 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:41.500 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:41.500 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:41.500 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:41.500 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:41.762 nvmf_trace.0 00:08:41.762 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:41.762 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:41.762 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:41.762 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:41.762 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:41.762 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:41.762 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:41.762 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:41.762 rmmod nvme_tcp 00:08:41.762 rmmod nvme_fabrics 00:08:41.762 rmmod nvme_keyring 00:08:41.762 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:41.762 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:41.762 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:41.762 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 684176 ']' 00:08:41.762 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 684176 00:08:41.762 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 684176 ']' 00:08:41.762 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 684176 
00:08:41.762 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:41.762 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:41.762 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 684176 00:08:41.762 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:41.762 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:41.762 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 684176' 00:08:41.762 killing process with pid 684176 00:08:41.762 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 684176 00:08:41.762 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 684176 00:08:42.023 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:42.023 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:42.023 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:42.023 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:42.023 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:42.023 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.023 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.023 22:56:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.939 22:57:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:43.939 00:08:43.939 real 0m44.391s 00:08:43.939 user 1m5.750s 00:08:43.939 sys 0m10.870s 00:08:43.939 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:43.939 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:43.939 ************************************ 00:08:43.939 END TEST nvmf_lvs_grow 00:08:43.939 ************************************ 00:08:43.939 22:57:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:43.939 22:57:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:43.939 22:57:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:43.939 22:57:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:44.200 ************************************ 00:08:44.200 START TEST nvmf_bdev_io_wait 00:08:44.200 ************************************ 00:08:44.200 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:44.200 * Looking for test storage... 
00:08:44.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:44.200 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:44.200 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:44.200 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.200 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.200 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.200 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.200 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.200 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.200 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.200 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.200 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:44.201 22:57:01 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:08:44.201 22:57:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:52.438 22:57:09 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:52.438 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:52.438 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:52.438 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:52.439 22:57:09 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:52.439 Found net devices under 0000:31:00.0: cvl_0_0 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:52.439 Found net devices under 0000:31:00.1: cvl_0_1 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:08:52.439 22:57:09 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:52.439 22:57:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:52.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:52.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:08:52.439 00:08:52.439 --- 10.0.0.2 ping statistics --- 00:08:52.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.439 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:08:52.439 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:52.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:52.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.463 ms 00:08:52.439 00:08:52.439 --- 10.0.0.1 ping statistics --- 00:08:52.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.439 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:08:52.439 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:52.439 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:08:52.439 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:52.439 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:52.439 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:52.439 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:52.439 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:52.439 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:52.439 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:52.439 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:52.439 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:52.439 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:52.439 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.439 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=690055 00:08:52.439 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@482 -- # waitforlisten 690055 00:08:52.439 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:52.439 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 690055 ']' 00:08:52.439 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.439 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:52.439 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.439 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:52.439 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.439 [2024-07-24 22:57:10.122834] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:08:52.439 [2024-07-24 22:57:10.122928] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.439 EAL: No free 2048 kB hugepages reported on node 1 00:08:52.439 [2024-07-24 22:57:10.207589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:52.700 [2024-07-24 22:57:10.286606] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:52.700 [2024-07-24 22:57:10.286646] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:52.700 [2024-07-24 22:57:10.286654] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:52.700 [2024-07-24 22:57:10.286661] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:52.700 [2024-07-24 22:57:10.286667] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:52.700 [2024-07-24 22:57:10.286794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.700 [2024-07-24 22:57:10.286974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.700 [2024-07-24 22:57:10.287114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.700 [2024-07-24 22:57:10.287115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:53.271 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:53.271 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:53.271 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:53.271 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:53.271 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.271 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:53.271 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:53.271 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.271 
22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.271 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.271 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:53.271 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.271 22:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.271 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.271 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:53.271 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.271 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.271 [2024-07-24 22:57:11.011248] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:53.271 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.271 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:53.271 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.271 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.271 Malloc0 00:08:53.271 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.271 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:53.271 
22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.271 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.532 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.532 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:53.532 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.532 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.532 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.532 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:53.532 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.532 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.532 [2024-07-24 22:57:11.078906] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.532 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.532 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=690494 00:08:53.532 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=690496 00:08:53.532 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 
00:08:53.532 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:53.532 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:53.532 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:53.532 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:53.532 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:53.532 { 00:08:53.532 "params": { 00:08:53.532 "name": "Nvme$subsystem", 00:08:53.532 "trtype": "$TEST_TRANSPORT", 00:08:53.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:53.532 "adrfam": "ipv4", 00:08:53.532 "trsvcid": "$NVMF_PORT", 00:08:53.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:53.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:53.532 "hdgst": ${hdgst:-false}, 00:08:53.532 "ddgst": ${ddgst:-false} 00:08:53.532 }, 00:08:53.532 "method": "bdev_nvme_attach_controller" 00:08:53.532 } 00:08:53.532 EOF 00:08:53.532 )") 00:08:53.532 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=690498 00:08:53.532 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:53.532 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:53.532 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:53.532 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:53.532 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:53.532 22:57:11 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:53.532 { 00:08:53.532 "params": { 00:08:53.532 "name": "Nvme$subsystem", 00:08:53.532 "trtype": "$TEST_TRANSPORT", 00:08:53.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:53.532 "adrfam": "ipv4", 00:08:53.532 "trsvcid": "$NVMF_PORT", 00:08:53.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:53.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:53.532 "hdgst": ${hdgst:-false}, 00:08:53.532 "ddgst": ${ddgst:-false} 00:08:53.532 }, 00:08:53.532 "method": "bdev_nvme_attach_controller" 00:08:53.532 } 00:08:53.532 EOF 00:08:53.532 )") 00:08:53.532 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=690501 00:08:53.532 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:53.532 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:53.533 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:53.533 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:53.533 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:53.533 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:53.533 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:53.533 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:53.533 { 00:08:53.533 "params": { 00:08:53.533 "name": "Nvme$subsystem", 00:08:53.533 "trtype": "$TEST_TRANSPORT", 00:08:53.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:53.533 "adrfam": "ipv4", 00:08:53.533 
"trsvcid": "$NVMF_PORT", 00:08:53.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:53.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:53.533 "hdgst": ${hdgst:-false}, 00:08:53.533 "ddgst": ${ddgst:-false} 00:08:53.533 }, 00:08:53.533 "method": "bdev_nvme_attach_controller" 00:08:53.533 } 00:08:53.533 EOF 00:08:53.533 )") 00:08:53.533 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:53.533 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:53.533 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:53.533 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:53.533 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:53.533 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:53.533 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:53.533 { 00:08:53.533 "params": { 00:08:53.533 "name": "Nvme$subsystem", 00:08:53.533 "trtype": "$TEST_TRANSPORT", 00:08:53.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:53.533 "adrfam": "ipv4", 00:08:53.533 "trsvcid": "$NVMF_PORT", 00:08:53.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:53.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:53.533 "hdgst": ${hdgst:-false}, 00:08:53.533 "ddgst": ${ddgst:-false} 00:08:53.533 }, 00:08:53.533 "method": "bdev_nvme_attach_controller" 00:08:53.533 } 00:08:53.533 EOF 00:08:53.533 )") 00:08:53.533 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:53.533 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@37 -- # wait 690494 00:08:53.533 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:53.533 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:53.533 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:53.533 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:53.533 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:53.533 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:53.533 "params": { 00:08:53.533 "name": "Nvme1", 00:08:53.533 "trtype": "tcp", 00:08:53.533 "traddr": "10.0.0.2", 00:08:53.533 "adrfam": "ipv4", 00:08:53.533 "trsvcid": "4420", 00:08:53.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:53.533 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:53.533 "hdgst": false, 00:08:53.533 "ddgst": false 00:08:53.533 }, 00:08:53.533 "method": "bdev_nvme_attach_controller" 00:08:53.533 }' 00:08:53.533 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:08:53.533 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:53.533 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:53.533 "params": { 00:08:53.533 "name": "Nvme1", 00:08:53.533 "trtype": "tcp", 00:08:53.533 "traddr": "10.0.0.2", 00:08:53.533 "adrfam": "ipv4", 00:08:53.533 "trsvcid": "4420", 00:08:53.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:53.533 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:53.533 "hdgst": false, 00:08:53.533 "ddgst": false 00:08:53.533 }, 00:08:53.533 "method": "bdev_nvme_attach_controller" 00:08:53.533 }' 00:08:53.533 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:53.533 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:53.533 "params": { 00:08:53.533 "name": "Nvme1", 00:08:53.533 "trtype": "tcp", 00:08:53.533 "traddr": "10.0.0.2", 00:08:53.533 "adrfam": "ipv4", 00:08:53.533 "trsvcid": "4420", 00:08:53.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:53.533 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:53.533 "hdgst": false, 00:08:53.533 "ddgst": false 00:08:53.533 }, 00:08:53.533 "method": "bdev_nvme_attach_controller" 00:08:53.533 }' 00:08:53.533 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:53.533 22:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:53.533 "params": { 00:08:53.533 "name": "Nvme1", 00:08:53.533 "trtype": "tcp", 00:08:53.533 "traddr": "10.0.0.2", 00:08:53.533 "adrfam": "ipv4", 00:08:53.533 "trsvcid": "4420", 00:08:53.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:53.533 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:53.533 "hdgst": false, 00:08:53.533 "ddgst": false 00:08:53.533 }, 00:08:53.533 "method": "bdev_nvme_attach_controller" 00:08:53.533 }' 00:08:53.533 [2024-07-24 22:57:11.133114] Starting SPDK v24.09-pre git sha1 
415e0bb41 / DPDK 24.03.0 initialization... 00:08:53.533 [2024-07-24 22:57:11.133165] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:53.533 [2024-07-24 22:57:11.133249] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:08:53.533 [2024-07-24 22:57:11.133294] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:53.533 [2024-07-24 22:57:11.137484] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:08:53.533 [2024-07-24 22:57:11.137531] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:53.533 [2024-07-24 22:57:11.137719] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:08:53.533 [2024-07-24 22:57:11.137767] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:53.533 EAL: No free 2048 kB hugepages reported on node 1 00:08:53.533 EAL: No free 2048 kB hugepages reported on node 1 00:08:53.533 EAL: No free 2048 kB hugepages reported on node 1 00:08:53.533 [2024-07-24 22:57:11.292592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.794 [2024-07-24 22:57:11.334436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.794 EAL: No free 2048 kB hugepages reported on node 1 00:08:53.794 [2024-07-24 22:57:11.347112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:53.794 [2024-07-24 22:57:11.384546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.794 [2024-07-24 22:57:11.385414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:08:53.794 [2024-07-24 22:57:11.434630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:53.794 [2024-07-24 22:57:11.446861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.794 [2024-07-24 22:57:11.498354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:53.794 Running I/O for 1 seconds... 00:08:54.054 Running I/O for 1 seconds... 00:08:54.054 Running I/O for 1 seconds... 00:08:54.054 Running I/O for 1 seconds... 
00:08:54.994 00:08:54.994 Latency(us) 00:08:54.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.994 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:54.994 Nvme1n1 : 1.00 187160.55 731.10 0.00 0.00 680.66 271.36 768.00 00:08:54.994 =================================================================================================================== 00:08:54.994 Total : 187160.55 731.10 0.00 0.00 680.66 271.36 768.00 00:08:54.994 00:08:54.994 Latency(us) 00:08:54.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.994 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:54.994 Nvme1n1 : 1.01 8798.29 34.37 0.00 0.00 14475.92 6335.15 23920.64 00:08:54.994 =================================================================================================================== 00:08:54.994 Total : 8798.29 34.37 0.00 0.00 14475.92 6335.15 23920.64 00:08:54.994 00:08:54.994 Latency(us) 00:08:54.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.994 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:54.994 Nvme1n1 : 1.00 18807.04 73.47 0.00 0.00 6785.95 4724.05 17694.72 00:08:54.994 =================================================================================================================== 00:08:54.994 Total : 18807.04 73.47 0.00 0.00 6785.95 4724.05 17694.72 00:08:54.994 00:08:54.994 Latency(us) 00:08:54.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.994 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:54.994 Nvme1n1 : 1.00 8566.82 33.46 0.00 0.00 14902.90 4532.91 34734.08 00:08:54.994 =================================================================================================================== 00:08:54.994 Total : 8566.82 33.46 0.00 0.00 14902.90 4532.91 34734.08 00:08:55.255 22:57:12 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 690496 00:08:55.255 22:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 690498 00:08:55.255 22:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 690501 00:08:55.255 22:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:55.255 22:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.255 22:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.255 22:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.255 22:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:55.255 22:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:55.255 22:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:55.255 22:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:08:55.255 22:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:55.255 22:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:08:55.255 22:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:55.255 22:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:55.255 rmmod nvme_tcp 00:08:55.255 rmmod nvme_fabrics 00:08:55.255 rmmod nvme_keyring 00:08:55.255 22:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:55.255 22:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 
-- # set -e 00:08:55.255 22:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:08:55.255 22:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 690055 ']' 00:08:55.255 22:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 690055 00:08:55.255 22:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 690055 ']' 00:08:55.255 22:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 690055 00:08:55.255 22:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:55.255 22:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:55.255 22:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 690055 00:08:55.255 22:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:55.255 22:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:55.255 22:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 690055' 00:08:55.255 killing process with pid 690055 00:08:55.255 22:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 690055 00:08:55.255 22:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 690055 00:08:55.515 22:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:55.515 22:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:55.515 22:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:55.515 22:57:13 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:55.515 22:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:55.515 22:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.515 22:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:55.515 22:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.436 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:57.697 00:08:57.697 real 0m13.486s 00:08:57.697 user 0m19.311s 00:08:57.697 sys 0m7.400s 00:08:57.697 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:57.697 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.697 ************************************ 00:08:57.697 END TEST nvmf_bdev_io_wait 00:08:57.697 ************************************ 00:08:57.697 22:57:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:57.697 22:57:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:57.697 22:57:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:57.697 22:57:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:57.697 ************************************ 00:08:57.697 START TEST nvmf_queue_depth 00:08:57.697 ************************************ 00:08:57.697 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:57.697 * Looking for test storage... 00:08:57.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:57.697 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:57.697 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:57.697 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:57.697 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:57.697 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:57.697 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:57.697 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:57.697 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:57.698 22:57:15 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:57.698 
22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:08:57.698 22:57:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:05.841 22:57:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:05.841 22:57:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:05.841 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.841 
22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:05.841 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:05.841 Found net devices under 0000:31:00.0: cvl_0_0 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:05.841 Found net devices under 0000:31:00.1: cvl_0_1 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:09:05.841 
22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:05.841 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:05.842 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:05.842 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:05.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:05.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.561 ms 00:09:05.842 00:09:05.842 --- 10.0.0.2 ping statistics --- 00:09:05.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.842 rtt min/avg/max/mdev = 0.561/0.561/0.561/0.000 ms 00:09:05.842 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:05.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:05.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:09:05.842 00:09:05.842 --- 10.0.0.1 ping statistics --- 00:09:05.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.842 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:09:05.842 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:05.842 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:09:05.842 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:05.842 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:05.842 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:05.842 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:05.842 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:05.842 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:05.842 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:05.842 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:05.842 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:05.842 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:05.842 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:05.842 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=695553 00:09:05.842 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 695553 
00:09:05.842 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:05.842 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 695553 ']' 00:09:05.842 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.842 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:05.842 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.842 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:05.842 22:57:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:06.102 [2024-07-24 22:57:23.642408] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:09:06.102 [2024-07-24 22:57:23.642462] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.102 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.103 [2024-07-24 22:57:23.736598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.103 [2024-07-24 22:57:23.831582] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:06.103 [2024-07-24 22:57:23.831636] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:06.103 [2024-07-24 22:57:23.831643] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:06.103 [2024-07-24 22:57:23.831650] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:06.103 [2024-07-24 22:57:23.831656] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:06.103 [2024-07-24 22:57:23.831691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.674 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:06.674 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:06.674 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:06.674 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:06.674 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:06.936 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:06.936 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:06.936 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.936 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:06.936 [2024-07-24 22:57:24.472432] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:06.936 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.936 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:06.936 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.936 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:06.936 Malloc0 00:09:06.936 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.936 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:06.936 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.936 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:06.936 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.936 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:06.936 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.936 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:06.936 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.936 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:06.936 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.936 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:06.936 [2024-07-24 22:57:24.549013] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:06.936 22:57:24 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.936 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=695892 00:09:06.936 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:06.936 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:06.936 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 695892 /var/tmp/bdevperf.sock 00:09:06.936 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 695892 ']' 00:09:06.936 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:06.936 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:06.936 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:06.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:06.936 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:06.936 22:57:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:06.936 [2024-07-24 22:57:24.605537] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:09:06.936 [2024-07-24 22:57:24.605592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid695892 ] 00:09:06.936 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.936 [2024-07-24 22:57:24.678093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.196 [2024-07-24 22:57:24.747449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.768 22:57:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:07.768 22:57:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:07.768 22:57:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:07.768 22:57:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.768 22:57:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.028 NVMe0n1 00:09:08.028 22:57:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.028 22:57:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:08.028 Running I/O for 10 seconds... 
00:09:18.028 00:09:18.028 Latency(us) 00:09:18.028 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:18.028 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:18.028 Verification LBA range: start 0x0 length 0x4000 00:09:18.028 NVMe0n1 : 10.05 11412.06 44.58 0.00 0.00 89455.52 24466.77 71652.69 00:09:18.028 =================================================================================================================== 00:09:18.028 Total : 11412.06 44.58 0.00 0.00 89455.52 24466.77 71652.69 00:09:18.028 0 00:09:18.028 22:57:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 695892 00:09:18.028 22:57:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 695892 ']' 00:09:18.028 22:57:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 695892 00:09:18.028 22:57:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:18.028 22:57:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:18.028 22:57:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 695892 00:09:18.289 22:57:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:18.289 22:57:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:18.289 22:57:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 695892' 00:09:18.289 killing process with pid 695892 00:09:18.289 22:57:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 695892 00:09:18.289 Received shutdown signal, test time was about 10.000000 seconds 00:09:18.289 00:09:18.289 Latency(us) 00:09:18.289 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:18.289 =================================================================================================================== 00:09:18.289 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:18.289 22:57:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 695892 00:09:18.289 22:57:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:18.289 22:57:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:18.289 22:57:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:18.289 22:57:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:09:18.289 22:57:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:18.289 22:57:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:09:18.289 22:57:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:18.289 22:57:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:18.289 rmmod nvme_tcp 00:09:18.289 rmmod nvme_fabrics 00:09:18.289 rmmod nvme_keyring 00:09:18.289 22:57:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:18.289 22:57:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:09:18.289 22:57:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:18.289 22:57:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 695553 ']' 00:09:18.289 22:57:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 695553 00:09:18.289 22:57:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 695553 ']' 00:09:18.289 22:57:36 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 695553 00:09:18.289 22:57:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:18.289 22:57:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:18.289 22:57:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 695553 00:09:18.550 22:57:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:18.550 22:57:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:18.550 22:57:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 695553' 00:09:18.550 killing process with pid 695553 00:09:18.550 22:57:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 695553 00:09:18.550 22:57:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 695553 00:09:18.550 22:57:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:18.550 22:57:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:18.550 22:57:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:18.550 22:57:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:18.550 22:57:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:18.550 22:57:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.550 22:57:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:18.550 
22:57:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.095 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:21.095 00:09:21.095 real 0m22.980s 00:09:21.095 user 0m25.981s 00:09:21.095 sys 0m7.165s 00:09:21.095 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:21.096 ************************************ 00:09:21.096 END TEST nvmf_queue_depth 00:09:21.096 ************************************ 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:21.096 ************************************ 00:09:21.096 START TEST nvmf_target_multipath 00:09:21.096 ************************************ 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:21.096 * Looking for test storage... 
00:09:21.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # 
nqn=nqn.2016-06.io.spdk:cnode1 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:09:21.096 22:57:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@291 -- # pci_devs=() 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- 
# mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 
00:09:29.279 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:29.279 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:29.279 Found net devices under 0000:31:00.0: cvl_0_0 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:29.279 Found net devices under 0000:31:00.1: cvl_0_1 00:09:29.279 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:29.280 22:57:46 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:29.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:29.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:09:29.280 00:09:29.280 --- 10.0.0.2 ping statistics --- 00:09:29.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.280 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:29.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:29.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:09:29.280 00:09:29.280 --- 10.0.0.1 ping statistics --- 00:09:29.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.280 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:29.280 22:57:46 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:29.280 only one NIC for nvmf test 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:29.280 rmmod nvme_tcp 00:09:29.280 rmmod nvme_fabrics 00:09:29.280 rmmod nvme_keyring 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:29.280 22:57:46 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.280 22:57:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.223 22:57:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:31.223 22:57:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:31.223 22:57:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:31.223 22:57:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:31.223 22:57:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:31.223 22:57:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:31.223 22:57:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:31.223 22:57:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:31.223 22:57:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:31.223 22:57:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:31.223 22:57:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:31.223 22:57:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:31.223 22:57:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:31.223 22:57:48 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:31.223 22:57:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:31.223 22:57:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:31.223 22:57:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:31.223 22:57:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:31.223 22:57:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.224 22:57:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:31.224 22:57:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.224 22:57:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:31.224 00:09:31.224 real 0m10.508s 00:09:31.224 user 0m2.318s 00:09:31.224 sys 0m6.097s 00:09:31.224 22:57:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:31.224 22:57:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:31.224 ************************************ 00:09:31.224 END TEST nvmf_target_multipath 00:09:31.224 ************************************ 00:09:31.224 22:57:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:31.224 22:57:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:31.224 22:57:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:31.224 
22:57:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:31.224 ************************************ 00:09:31.224 START TEST nvmf_zcopy 00:09:31.224 ************************************ 00:09:31.224 22:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:31.486 * Looking for test storage... 00:09:31.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:31.486 22:57:49 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:09:31.486 22:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:39.630 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:39.630 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:09:39.630 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:39.630 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:39.630 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:39.630 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:39.630 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:39.630 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:09:39.630 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:39.630 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@296 -- # e810=() 00:09:39.630 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:09:39.630 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:09:39.630 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:09:39.630 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:09:39.630 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:09:39.630 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:39.630 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:39.630 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:39.630 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:39.630 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:39.630 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:39.630 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:39.630 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:39.630 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:39.630 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:39.630 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:39.630 22:57:57 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:39.630 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:39.630 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:39.630 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:39.630 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:39.630 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:39.631 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:39.631 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:39.631 22:57:57 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:39.631 Found net devices under 0000:31:00.0: cvl_0_0 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.631 
22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:39.631 Found net devices under 0000:31:00.1: cvl_0_1 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:39.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:39.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.563 ms 00:09:39.631 00:09:39.631 --- 10.0.0.2 ping statistics --- 00:09:39.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.631 rtt min/avg/max/mdev = 0.563/0.563/0.563/0.000 ms 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:39.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:39.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:09:39.631 00:09:39.631 --- 10.0.0.1 ping statistics --- 00:09:39.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.631 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=707600 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 707600 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 707600 ']' 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:39.631 22:57:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:39.892 [2024-07-24 22:57:57.447365] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:09:39.892 [2024-07-24 22:57:57.447414] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.892 EAL: No free 2048 kB hugepages reported on node 1 00:09:39.892 [2024-07-24 22:57:57.538560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.892 [2024-07-24 22:57:57.604006] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:39.893 [2024-07-24 22:57:57.604048] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:39.893 [2024-07-24 22:57:57.604056] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:39.893 [2024-07-24 22:57:57.604062] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:39.893 [2024-07-24 22:57:57.604068] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:39.893 [2024-07-24 22:57:57.604087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.464 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:40.464 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:40.464 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:40.464 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:40.464 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:40.725 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:40.725 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:40.725 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:40.725 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.725 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:40.725 [2024-07-24 22:57:58.258950] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:40.725 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.725 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:40.725 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.725 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:40.725 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
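For readers following the trace above: each `rpc_cmd` invocation (`nvmf_create_transport -t tcp -o -c 0 --zcopy`, `nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 ...`) is a thin wrapper that sends a JSON-RPC request to the target over `/var/tmp/spdk.sock`. A minimal sketch of what those request envelopes look like; the parameter names here are illustrative assumptions inferred from the CLI flags in this log, not copied from it:

```python
import json

def jsonrpc_request(method: str, params: dict, req_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 request envelope like the ones rpc_cmd sends over spdk.sock."""
    return json.dumps({"jsonrpc": "2.0", "id": req_id, "method": method, "params": params})

# Requests roughly corresponding to the zcopy.sh setup traced above
# (field names are assumptions for illustration).
requests = [
    jsonrpc_request("nvmf_create_transport",
                    {"trtype": "tcp", "zcopy": True}, 1),
    jsonrpc_request("nvmf_create_subsystem",
                    {"nqn": "nqn.2016-06.io.spdk:cnode1",
                     "allow_any_host": True,
                     "serial_number": "SPDK00000000000001",
                     "max_namespaces": 10}, 2),
]

for req in requests:
    print(req)
```

In the real test these payloads are written to the UNIX socket the log says the target is listening on (`/var/tmp/spdk.sock`); the sketch only builds them.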
00:09:40.725 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:40.725 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.725 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:40.725 [2024-07-24 22:57:58.275225] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:40.726 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.726 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:40.726 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.726 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:40.726 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.726 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:40.726 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.726 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:40.726 malloc0 00:09:40.726 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.726 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:40.726 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.726 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:40.726 22:57:58 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.726 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:40.726 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:40.726 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:40.726 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:40.726 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:40.726 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:40.726 { 00:09:40.726 "params": { 00:09:40.726 "name": "Nvme$subsystem", 00:09:40.726 "trtype": "$TEST_TRANSPORT", 00:09:40.726 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:40.726 "adrfam": "ipv4", 00:09:40.726 "trsvcid": "$NVMF_PORT", 00:09:40.726 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:40.726 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:40.726 "hdgst": ${hdgst:-false}, 00:09:40.726 "ddgst": ${ddgst:-false} 00:09:40.726 }, 00:09:40.726 "method": "bdev_nvme_attach_controller" 00:09:40.726 } 00:09:40.726 EOF 00:09:40.726 )") 00:09:40.726 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:40.726 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:09:40.726 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:40.726 22:57:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:40.726 "params": { 00:09:40.726 "name": "Nvme1", 00:09:40.726 "trtype": "tcp", 00:09:40.726 "traddr": "10.0.0.2", 00:09:40.726 "adrfam": "ipv4", 00:09:40.726 "trsvcid": "4420", 00:09:40.726 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:40.726 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:40.726 "hdgst": false, 00:09:40.726 "ddgst": false 00:09:40.726 }, 00:09:40.726 "method": "bdev_nvme_attach_controller" 00:09:40.726 }' 00:09:40.726 [2024-07-24 22:57:58.388410] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:09:40.726 [2024-07-24 22:57:58.388473] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid707714 ] 00:09:40.726 EAL: No free 2048 kB hugepages reported on node 1 00:09:40.726 [2024-07-24 22:57:58.450463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.986 [2024-07-24 22:57:58.517059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.986 Running I/O for 10 seconds... 
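The `gen_nvmf_target_json` output printed just above (the `bdev_nvme_attach_controller` block that bdevperf consumes via `--json /dev/fd/62`) can be rebuilt with a short helper. A sketch assuming the same target address, port, and NQNs shown in this log; it reproduces only the fragment visible here, not the full bdevperf config file:

```python
import json

def gen_nvmf_target_json(traddr: str = "10.0.0.2", trsvcid: str = "4420",
                         subsystem: int = 1) -> str:
    """Rebuild the attach-controller fragment printed by gen_nvmf_target_json above."""
    cfg = {
        "params": {
            "name": f"Nvme{subsystem}",
            "trtype": "tcp",
            "traddr": traddr,
            "adrfam": "ipv4",
            "trsvcid": trsvcid,
            "subnqn": f"nqn.2016-06.io.spdk:cnode{subsystem}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{subsystem}",
            "hdgst": False,   # header digest off, as in the log
            "ddgst": False,   # data digest off, as in the log
        },
        "method": "bdev_nvme_attach_controller",
    }
    return json.dumps(cfg, indent=2)

print(gen_nvmf_target_json())
```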
00:09:50.986
00:09:50.986                                                                                    Latency(us)
00:09:50.986 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:50.986 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:09:50.986   Verification LBA range: start 0x0 length 0x1000
00:09:50.986   Nvme1n1             :      10.01    9168.64      71.63       0.00       0.00   13910.89    1788.59   30365.01
00:09:51.246 ===================================================================================================================
00:09:51.246   Total               :            9168.64      71.63       0.00       0.00   13910.89    1788.59   30365.01
00:09:51.246 22:58:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=709886 00:09:51.246 22:58:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:51.246 22:58:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.246 22:58:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:51.246 22:58:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:51.246 22:58:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:51.246 22:58:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:51.246 22:58:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:51.246 22:58:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:51.246 { 00:09:51.246 "params": { 00:09:51.246 "name": "Nvme$subsystem", 00:09:51.246 "trtype": "$TEST_TRANSPORT", 00:09:51.246 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:51.247 "adrfam": "ipv4", 00:09:51.247 "trsvcid": "$NVMF_PORT", 00:09:51.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:51.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:51.247 "hdgst": 
${hdgst:-false},
00:09:51.247 "ddgst": ${ddgst:-false}
00:09:51.247 },
00:09:51.247 "method": "bdev_nvme_attach_controller"
00:09:51.247 }
00:09:51.247 EOF
00:09:51.247 )")
00:09:51.247 [2024-07-24 22:58:08.873023] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:51.247 [2024-07-24 22:58:08.873056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:51.247 22:58:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:09:51.247 22:58:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:09:51.247 22:58:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:09:51.247 22:58:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:09:51.247 "params": {
00:09:51.247 "name": "Nvme1",
00:09:51.247 "trtype": "tcp",
00:09:51.247 "traddr": "10.0.0.2",
00:09:51.247 "adrfam": "ipv4",
00:09:51.247 "trsvcid": "4420",
00:09:51.247 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:51.247 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:51.247 "hdgst": false,
00:09:51.247 "ddgst": false
00:09:51.247 },
00:09:51.247 "method": "bdev_nvme_attach_controller"
00:09:51.247 }'
00:09:51.247 [2024-07-24 22:58:08.881010 .. 2024-07-24 22:58:10.256] (the two *ERROR* lines above repeat for every subsequent request in this interval)
00:09:51.247 [2024-07-24 22:58:08.916796] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization...
00:09:51.247 [2024-07-24 22:58:08.916843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid709886 ]
00:09:51.247 EAL: No free 2048 kB hugepages reported on node 1
00:09:51.247 [2024-07-24 22:58:08.981540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:51.508 [2024-07-24 22:58:09.045564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:51.508 Running I/O for 5 seconds...
add namespace 00:09:52.555 [2024-07-24 22:58:10.265064] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.555 [2024-07-24 22:58:10.265080] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.555 [2024-07-24 22:58:10.274086] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.555 [2024-07-24 22:58:10.274101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.555 [2024-07-24 22:58:10.282671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.555 [2024-07-24 22:58:10.282686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.555 [2024-07-24 22:58:10.291343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.555 [2024-07-24 22:58:10.291358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.555 [2024-07-24 22:58:10.300279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.555 [2024-07-24 22:58:10.300294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.555 [2024-07-24 22:58:10.308820] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.555 [2024-07-24 22:58:10.308835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.555 [2024-07-24 22:58:10.317774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.555 [2024-07-24 22:58:10.317789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.555 [2024-07-24 22:58:10.326105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.555 [2024-07-24 22:58:10.326120] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.555 [2024-07-24 22:58:10.334978] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.555 [2024-07-24 22:58:10.334993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.816 [2024-07-24 22:58:10.343532] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.816 [2024-07-24 22:58:10.343548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.816 [2024-07-24 22:58:10.351978] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.816 [2024-07-24 22:58:10.351993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.816 [2024-07-24 22:58:10.360770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.816 [2024-07-24 22:58:10.360785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.816 [2024-07-24 22:58:10.369539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.816 [2024-07-24 22:58:10.369554] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.816 [2024-07-24 22:58:10.378554] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.816 [2024-07-24 22:58:10.378570] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.816 [2024-07-24 22:58:10.387466] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.816 [2024-07-24 22:58:10.387481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.816 [2024-07-24 22:58:10.396816] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.816 [2024-07-24 22:58:10.396831] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.816 [2024-07-24 22:58:10.405697] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:52.816 [2024-07-24 22:58:10.405713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.816 [2024-07-24 22:58:10.414102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.816 [2024-07-24 22:58:10.414118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.816 [2024-07-24 22:58:10.423212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.816 [2024-07-24 22:58:10.423226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.816 [2024-07-24 22:58:10.432019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.816 [2024-07-24 22:58:10.432033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.816 [2024-07-24 22:58:10.440526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.816 [2024-07-24 22:58:10.440541] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.816 [2024-07-24 22:58:10.448942] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.816 [2024-07-24 22:58:10.448957] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.816 [2024-07-24 22:58:10.458210] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.816 [2024-07-24 22:58:10.458226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.816 [2024-07-24 22:58:10.467084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.816 [2024-07-24 22:58:10.467099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.816 [2024-07-24 22:58:10.475655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.816 
[2024-07-24 22:58:10.475671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.816 [2024-07-24 22:58:10.484730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.816 [2024-07-24 22:58:10.484745] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.816 [2024-07-24 22:58:10.492882] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.816 [2024-07-24 22:58:10.492897] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.816 [2024-07-24 22:58:10.501528] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.816 [2024-07-24 22:58:10.501542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.816 [2024-07-24 22:58:10.510179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.816 [2024-07-24 22:58:10.510193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.816 [2024-07-24 22:58:10.519286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.816 [2024-07-24 22:58:10.519301] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.816 [2024-07-24 22:58:10.527497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.816 [2024-07-24 22:58:10.527512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.816 [2024-07-24 22:58:10.535921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.816 [2024-07-24 22:58:10.535937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.816 [2024-07-24 22:58:10.544233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.816 [2024-07-24 22:58:10.544248] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.816 [2024-07-24 22:58:10.553236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.816 [2024-07-24 22:58:10.553251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.816 [2024-07-24 22:58:10.561870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.816 [2024-07-24 22:58:10.561885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.816 [2024-07-24 22:58:10.570292] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.816 [2024-07-24 22:58:10.570307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.816 [2024-07-24 22:58:10.579349] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.816 [2024-07-24 22:58:10.579365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.816 [2024-07-24 22:58:10.587747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.816 [2024-07-24 22:58:10.587767] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.816 [2024-07-24 22:58:10.596628] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.816 [2024-07-24 22:58:10.596643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.078 [2024-07-24 22:58:10.605050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.078 [2024-07-24 22:58:10.605065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.078 [2024-07-24 22:58:10.614234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.078 [2024-07-24 22:58:10.614249] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:53.078 [2024-07-24 22:58:10.623182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.078 [2024-07-24 22:58:10.623197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.078 [2024-07-24 22:58:10.631801] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.078 [2024-07-24 22:58:10.631815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.078 [2024-07-24 22:58:10.640788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.078 [2024-07-24 22:58:10.640804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.078 [2024-07-24 22:58:10.649681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.078 [2024-07-24 22:58:10.649696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.078 [2024-07-24 22:58:10.658780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.078 [2024-07-24 22:58:10.658795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.078 [2024-07-24 22:58:10.667967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.078 [2024-07-24 22:58:10.667981] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.078 [2024-07-24 22:58:10.676606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.078 [2024-07-24 22:58:10.676621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.078 [2024-07-24 22:58:10.685525] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.078 [2024-07-24 22:58:10.685540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.078 [2024-07-24 22:58:10.694277] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.078 [2024-07-24 22:58:10.694292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.078 [2024-07-24 22:58:10.702923] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.078 [2024-07-24 22:58:10.702942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.078 [2024-07-24 22:58:10.711578] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.078 [2024-07-24 22:58:10.711593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.078 [2024-07-24 22:58:10.720419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.078 [2024-07-24 22:58:10.720434] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.078 [2024-07-24 22:58:10.728943] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.078 [2024-07-24 22:58:10.728958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.078 [2024-07-24 22:58:10.737805] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.078 [2024-07-24 22:58:10.737820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.078 [2024-07-24 22:58:10.746639] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.078 [2024-07-24 22:58:10.746653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.079 [2024-07-24 22:58:10.755603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.079 [2024-07-24 22:58:10.755619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.079 [2024-07-24 22:58:10.764708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:53.079 [2024-07-24 22:58:10.764722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.079 [2024-07-24 22:58:10.773511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.079 [2024-07-24 22:58:10.773526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.079 [2024-07-24 22:58:10.782599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.079 [2024-07-24 22:58:10.782614] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.079 [2024-07-24 22:58:10.791233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.079 [2024-07-24 22:58:10.791248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.079 [2024-07-24 22:58:10.799820] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.079 [2024-07-24 22:58:10.799834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.079 [2024-07-24 22:58:10.808629] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.079 [2024-07-24 22:58:10.808643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.079 [2024-07-24 22:58:10.817271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.079 [2024-07-24 22:58:10.817286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.079 [2024-07-24 22:58:10.826187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.079 [2024-07-24 22:58:10.826201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.079 [2024-07-24 22:58:10.834918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.079 
[2024-07-24 22:58:10.834932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.079 [2024-07-24 22:58:10.843777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.079 [2024-07-24 22:58:10.843792] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.079 [2024-07-24 22:58:10.852481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.079 [2024-07-24 22:58:10.852495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.079 [2024-07-24 22:58:10.860884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.079 [2024-07-24 22:58:10.860899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.340 [2024-07-24 22:58:10.869638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.340 [2024-07-24 22:58:10.869658] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.340 [2024-07-24 22:58:10.878101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.340 [2024-07-24 22:58:10.878116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.340 [2024-07-24 22:58:10.887119] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.340 [2024-07-24 22:58:10.887134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.340 [2024-07-24 22:58:10.895646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.340 [2024-07-24 22:58:10.895661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.340 [2024-07-24 22:58:10.904473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.340 [2024-07-24 22:58:10.904488] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.340 [2024-07-24 22:58:10.913641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.340 [2024-07-24 22:58:10.913656] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.340 [2024-07-24 22:58:10.921870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.340 [2024-07-24 22:58:10.921885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.340 [2024-07-24 22:58:10.931117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.340 [2024-07-24 22:58:10.931131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.340 [2024-07-24 22:58:10.939224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.340 [2024-07-24 22:58:10.939239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.340 [2024-07-24 22:58:10.948202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.340 [2024-07-24 22:58:10.948217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.340 [2024-07-24 22:58:10.957063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.340 [2024-07-24 22:58:10.957079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.340 [2024-07-24 22:58:10.964778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.340 [2024-07-24 22:58:10.964792] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.340 [2024-07-24 22:58:10.974302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.340 [2024-07-24 22:58:10.974317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:53.340 [2024-07-24 22:58:10.983256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.340 [2024-07-24 22:58:10.983270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.340 [2024-07-24 22:58:10.992274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.340 [2024-07-24 22:58:10.992288] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.340 [2024-07-24 22:58:11.000985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.340 [2024-07-24 22:58:11.001000] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.340 [2024-07-24 22:58:11.009979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.340 [2024-07-24 22:58:11.009994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.340 [2024-07-24 22:58:11.018937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.340 [2024-07-24 22:58:11.018952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.340 [2024-07-24 22:58:11.027975] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.340 [2024-07-24 22:58:11.027989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.340 [2024-07-24 22:58:11.036824] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.340 [2024-07-24 22:58:11.036841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.340 [2024-07-24 22:58:11.045866] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.341 [2024-07-24 22:58:11.045881] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.341 [2024-07-24 22:58:11.054504] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.341 [2024-07-24 22:58:11.054519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.341 [2024-07-24 22:58:11.063255] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.341 [2024-07-24 22:58:11.063269] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.341 [2024-07-24 22:58:11.072029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.341 [2024-07-24 22:58:11.072044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.341 [2024-07-24 22:58:11.080877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.341 [2024-07-24 22:58:11.080891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.341 [2024-07-24 22:58:11.088675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.341 [2024-07-24 22:58:11.088689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.341 [2024-07-24 22:58:11.098174] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.341 [2024-07-24 22:58:11.098189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.341 [2024-07-24 22:58:11.106174] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.341 [2024-07-24 22:58:11.106189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.341 [2024-07-24 22:58:11.114629] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.341 [2024-07-24 22:58:11.114644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.341 [2024-07-24 22:58:11.123078] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:53.341 [2024-07-24 22:58:11.123093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.601 [2024-07-24 22:58:11.131702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.601 [2024-07-24 22:58:11.131717] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.601 [2024-07-24 22:58:11.140176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.601 [2024-07-24 22:58:11.140190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.601 [2024-07-24 22:58:11.148831] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.601 [2024-07-24 22:58:11.148846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.601 [2024-07-24 22:58:11.156898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.601 [2024-07-24 22:58:11.156912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.601 [2024-07-24 22:58:11.165495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.601 [2024-07-24 22:58:11.165510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.601 [2024-07-24 22:58:11.174091] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.601 [2024-07-24 22:58:11.174106] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.601 [2024-07-24 22:58:11.183141] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.601 [2024-07-24 22:58:11.183156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.601 [2024-07-24 22:58:11.191210] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.601 
[2024-07-24 22:58:11.191225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.601 [2024-07-24 22:58:11.200325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.601 [2024-07-24 22:58:11.200343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.601 [2024-07-24 22:58:11.208394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.601 [2024-07-24 22:58:11.208408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.601 [2024-07-24 22:58:11.217136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.601 [2024-07-24 22:58:11.217151] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.601 [2024-07-24 22:58:11.225494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.601 [2024-07-24 22:58:11.225508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.601 [2024-07-24 22:58:11.234468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.601 [2024-07-24 22:58:11.234483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.601 [2024-07-24 22:58:11.242802] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.601 [2024-07-24 22:58:11.242817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.601 [2024-07-24 22:58:11.251063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.601 [2024-07-24 22:58:11.251077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.601 [2024-07-24 22:58:11.259668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.601 [2024-07-24 22:58:11.259683] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.601 [2024-07-24 22:58:11.268261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.601 [2024-07-24 22:58:11.268276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.601 [2024-07-24 22:58:11.277000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.601 [2024-07-24 22:58:11.277015] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.601 [2024-07-24 22:58:11.285423] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.601 [2024-07-24 22:58:11.285437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.601 [2024-07-24 22:58:11.293852] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.601 [2024-07-24 22:58:11.293867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.601 [2024-07-24 22:58:11.302349] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.601 [2024-07-24 22:58:11.302363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.601 [2024-07-24 22:58:11.311071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.601 [2024-07-24 22:58:11.311086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.601 [2024-07-24 22:58:11.319867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.602 [2024-07-24 22:58:11.319882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.602 [2024-07-24 22:58:11.328544] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.602 [2024-07-24 22:58:11.328559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:53.602 [2024-07-24 22:58:11.337396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.602 [2024-07-24 22:58:11.337410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.602 [2024-07-24 22:58:11.346500] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.602 [2024-07-24 22:58:11.346514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.602 [2024-07-24 22:58:11.355548] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.602 [2024-07-24 22:58:11.355563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.602 [2024-07-24 22:58:11.364125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.602 [2024-07-24 22:58:11.364143] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.602 [2024-07-24 22:58:11.372855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.602 [2024-07-24 22:58:11.372870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.602 [2024-07-24 22:58:11.381762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.602 [2024-07-24 22:58:11.381777] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.862 [2024-07-24 22:58:11.389674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.862 [2024-07-24 22:58:11.389688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.862 [2024-07-24 22:58:11.398467] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.862 [2024-07-24 22:58:11.398482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.862 [2024-07-24 22:58:11.407355] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.862 [2024-07-24 22:58:11.407369] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.862 [2024-07-24 22:58:11.415331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.862 [2024-07-24 22:58:11.415345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.862 [2024-07-24 22:58:11.424362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.862 [2024-07-24 22:58:11.424377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.862 [2024-07-24 22:58:11.432745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.862 [2024-07-24 22:58:11.432764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.862 [2024-07-24 22:58:11.441376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.862 [2024-07-24 22:58:11.441392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.862 [2024-07-24 22:58:11.450505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.862 [2024-07-24 22:58:11.450519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.862 [2024-07-24 22:58:11.459258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.862 [2024-07-24 22:58:11.459273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.862 [2024-07-24 22:58:11.468292] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.862 [2024-07-24 22:58:11.468307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.862 [2024-07-24 22:58:11.476856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:53.862 [2024-07-24 22:58:11.476871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.862 [2024-07-24 22:58:11.485462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.862 [2024-07-24 22:58:11.485477] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.862 [2024-07-24 22:58:11.493884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.862 [2024-07-24 22:58:11.493900] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.862 [2024-07-24 22:58:11.502733] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.862 [2024-07-24 22:58:11.502748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.862 [2024-07-24 22:58:11.511445] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.862 [2024-07-24 22:58:11.511460] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.862 [2024-07-24 22:58:11.519844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.862 [2024-07-24 22:58:11.519859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.862 [2024-07-24 22:58:11.528703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.862 [2024-07-24 22:58:11.528718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.862 [2024-07-24 22:58:11.537030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.862 [2024-07-24 22:58:11.537044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.862 [2024-07-24 22:58:11.545949] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.862 
[2024-07-24 22:58:11.545963] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.862 [2024-07-24 22:58:11.554836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.862 [2024-07-24 22:58:11.554850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.862 [2024-07-24 22:58:11.563766] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.862 [2024-07-24 22:58:11.563780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.862 [2024-07-24 22:58:11.572267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.862 [2024-07-24 22:58:11.572282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.862 [2024-07-24 22:58:11.581050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.862 [2024-07-24 22:58:11.581065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.862 [2024-07-24 22:58:11.589837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.862 [2024-07-24 22:58:11.589852] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.862 [2024-07-24 22:58:11.598371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.862 [2024-07-24 22:58:11.598386] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.862 [2024-07-24 22:58:11.607124] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.862 [2024-07-24 22:58:11.607139] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.862 [2024-07-24 22:58:11.615775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.862 [2024-07-24 22:58:11.615790] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.862 [2024-07-24 22:58:11.625065] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.862 [2024-07-24 22:58:11.625080] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.862 [2024-07-24 22:58:11.633249] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.862 [2024-07-24 22:58:11.633264] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.862 [2024-07-24 22:58:11.642095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.862 [2024-07-24 22:58:11.642109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.127 [2024-07-24 22:58:11.650204] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.127 [2024-07-24 22:58:11.650219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.127 [2024-07-24 22:58:11.658969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.127 [2024-07-24 22:58:11.658984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.127 [2024-07-24 22:58:11.667498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.127 [2024-07-24 22:58:11.667512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.127 [2024-07-24 22:58:11.676139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.127 [2024-07-24 22:58:11.676154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.127 [2024-07-24 22:58:11.684994] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.127 [2024-07-24 22:58:11.685009] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:54.127 [2024-07-24 22:58:11.692879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.127 [2024-07-24 22:58:11.692894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.127 [2024-07-24 22:58:11.702179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.127 [2024-07-24 22:58:11.702193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.127 [2024-07-24 22:58:11.710815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.127 [2024-07-24 22:58:11.710829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.127 [2024-07-24 22:58:11.719127] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.127 [2024-07-24 22:58:11.719143] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.127 [2024-07-24 22:58:11.727687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.127 [2024-07-24 22:58:11.727702] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.127 [2024-07-24 22:58:11.736208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.127 [2024-07-24 22:58:11.736223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.127 [2024-07-24 22:58:11.744349] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.127 [2024-07-24 22:58:11.744364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.127 [2024-07-24 22:58:11.753298] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.127 [2024-07-24 22:58:11.753313] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.127 [2024-07-24 22:58:11.761792] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.127 [2024-07-24 22:58:11.761807] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.127 [2024-07-24 22:58:11.770386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.127 [2024-07-24 22:58:11.770402] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.127 [2024-07-24 22:58:11.778840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.127 [2024-07-24 22:58:11.778855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.127 [2024-07-24 22:58:11.787619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.127 [2024-07-24 22:58:11.787634] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.127 [2024-07-24 22:58:11.796587] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.127 [2024-07-24 22:58:11.796603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.127 [2024-07-24 22:58:11.805568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.127 [2024-07-24 22:58:11.805583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.127 [2024-07-24 22:58:11.813724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.127 [2024-07-24 22:58:11.813739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.127 [2024-07-24 22:58:11.822730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.127 [2024-07-24 22:58:11.822745] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.127 [2024-07-24 22:58:11.831201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:54.127 [2024-07-24 22:58:11.831216] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.127 [2024-07-24 22:58:11.840295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.127 [2024-07-24 22:58:11.840310] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.127 [2024-07-24 22:58:11.849281] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.127 [2024-07-24 22:58:11.849296] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.127 [2024-07-24 22:58:11.858309] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.127 [2024-07-24 22:58:11.858325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.127 [2024-07-24 22:58:11.866884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.127 [2024-07-24 22:58:11.866899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.127 [2024-07-24 22:58:11.875545] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.127 [2024-07-24 22:58:11.875560] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.127 [2024-07-24 22:58:11.884634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.127 [2024-07-24 22:58:11.884650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.127 [2024-07-24 22:58:11.892758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.127 [2024-07-24 22:58:11.892773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.127 [2024-07-24 22:58:11.901392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.127 
[2024-07-24 22:58:11.901407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.127 [2024-07-24 22:58:11.910374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.127 [2024-07-24 22:58:11.910388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.389 [2024-07-24 22:58:11.919167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.389 [2024-07-24 22:58:11.919183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.389 [2024-07-24 22:58:11.927557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.389 [2024-07-24 22:58:11.927571] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.389 [2024-07-24 22:58:11.936440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.389 [2024-07-24 22:58:11.936456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.389 [2024-07-24 22:58:11.945147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.389 [2024-07-24 22:58:11.945163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.389 [2024-07-24 22:58:11.953633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.389 [2024-07-24 22:58:11.953649] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.389 [2024-07-24 22:58:11.962088] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.389 [2024-07-24 22:58:11.962103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.389 [2024-07-24 22:58:11.971056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.389 [2024-07-24 22:58:11.971070] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.389 [2024-07-24 22:58:11.980076] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.389 [2024-07-24 22:58:11.980090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.389 [2024-07-24 22:58:11.988419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.389 [2024-07-24 22:58:11.988435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.389 [2024-07-24 22:58:11.997055] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.389 [2024-07-24 22:58:11.997070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.389 [2024-07-24 22:58:12.005482] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.389 [2024-07-24 22:58:12.005497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.389 [2024-07-24 22:58:12.013780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.389 [2024-07-24 22:58:12.013795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.389 [2024-07-24 22:58:12.022704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.389 [2024-07-24 22:58:12.022720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.389 [2024-07-24 22:58:12.030901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.389 [2024-07-24 22:58:12.030916] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.389 [2024-07-24 22:58:12.039807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.389 [2024-07-24 22:58:12.039822] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:54.389 [2024-07-24 22:58:12.048811] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.389 [2024-07-24 22:58:12.048827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.389 [2024-07-24 22:58:12.057666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.389 [2024-07-24 22:58:12.057681] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.389 [2024-07-24 22:58:12.066722] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.389 [2024-07-24 22:58:12.066737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.389 [2024-07-24 22:58:12.075138] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.389 [2024-07-24 22:58:12.075152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.389 [2024-07-24 22:58:12.083845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.389 [2024-07-24 22:58:12.083861] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.389 [2024-07-24 22:58:12.092769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.389 [2024-07-24 22:58:12.092784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.389 [2024-07-24 22:58:12.101325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.389 [2024-07-24 22:58:12.101339] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.389 [2024-07-24 22:58:12.109983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.389 [2024-07-24 22:58:12.109998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.389 [2024-07-24 22:58:12.118276] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.389 [2024-07-24 22:58:12.118291] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.389 [2024-07-24 22:58:12.126828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.389 [2024-07-24 22:58:12.126844] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.389 [2024-07-24 22:58:12.134926] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.389 [2024-07-24 22:58:12.134941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.389 [2024-07-24 22:58:12.143926] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.389 [2024-07-24 22:58:12.143941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.389 [2024-07-24 22:58:12.152535] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.389 [2024-07-24 22:58:12.152551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.389 [2024-07-24 22:58:12.161310] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.389 [2024-07-24 22:58:12.161326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.389 [2024-07-24 22:58:12.170483] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.389 [2024-07-24 22:58:12.170497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.650 [2024-07-24 22:58:12.178204] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.650 [2024-07-24 22:58:12.178223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.650 [2024-07-24 22:58:12.187180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:54.650 [2024-07-24 22:58:12.187195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.650 [2024-07-24 22:58:12.195810] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.650 [2024-07-24 22:58:12.195825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.650 [2024-07-24 22:58:12.204716] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.650 [2024-07-24 22:58:12.204731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.650 [2024-07-24 22:58:12.212942] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.650 [2024-07-24 22:58:12.212957] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.650 [2024-07-24 22:58:12.221534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.650 [2024-07-24 22:58:12.221549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.650 [2024-07-24 22:58:12.230277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.650 [2024-07-24 22:58:12.230291] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.650 [2024-07-24 22:58:12.238616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.650 [2024-07-24 22:58:12.238631] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.650 [2024-07-24 22:58:12.247609] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.650 [2024-07-24 22:58:12.247624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.650 [2024-07-24 22:58:12.255922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.650 
[2024-07-24 22:58:12.255938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.650 [2024-07-24 22:58:12.264536] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.650 [2024-07-24 22:58:12.264551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.650 [2024-07-24 22:58:12.273476] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.650 [2024-07-24 22:58:12.273491] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.650 [2024-07-24 22:58:12.282000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.650 [2024-07-24 22:58:12.282016] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.650 [2024-07-24 22:58:12.290596] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.650 [2024-07-24 22:58:12.290611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.650 [2024-07-24 22:58:12.299873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.650 [2024-07-24 22:58:12.299889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.650 [2024-07-24 22:58:12.308493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.650 [2024-07-24 22:58:12.308509] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.650 [2024-07-24 22:58:12.317250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.650 [2024-07-24 22:58:12.317266] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.650 [2024-07-24 22:58:12.326048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.650 [2024-07-24 22:58:12.326064] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.650 [2024-07-24 22:58:12.334491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.650 [2024-07-24 22:58:12.334506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.650 [2024-07-24 22:58:12.343129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.650 [2024-07-24 22:58:12.343147] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.650 [2024-07-24 22:58:12.351513] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.650 [2024-07-24 22:58:12.351529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.650 [2024-07-24 22:58:12.360382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.650 [2024-07-24 22:58:12.360398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.650 [2024-07-24 22:58:12.369707] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.650 [2024-07-24 22:58:12.369722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.650 [2024-07-24 22:58:12.378137] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.651 [2024-07-24 22:58:12.378152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.651 [2024-07-24 22:58:12.386962] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.651 [2024-07-24 22:58:12.386976] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.651 [2024-07-24 22:58:12.395792] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.651 [2024-07-24 22:58:12.395807] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace
00:09:54.651 [2024-07-24 22:58:12.404799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:54.651 [2024-07-24 22:58:12.404814] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... same *ERROR* pair repeated for every retry from 22:58:12.413 through 22:58:13.824 ...]
00:09:56.215 [2024-07-24 22:58:13.833047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:56.215 [2024-07-24 22:58:13.833066] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:09:56.215 [2024-07-24 22:58:13.841775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.215 [2024-07-24 22:58:13.841790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.215 [2024-07-24 22:58:13.850088] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.215 [2024-07-24 22:58:13.850104] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.215 [2024-07-24 22:58:13.858861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.215 [2024-07-24 22:58:13.858876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.215 [2024-07-24 22:58:13.867258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.215 [2024-07-24 22:58:13.867273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.215 [2024-07-24 22:58:13.875922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.215 [2024-07-24 22:58:13.875938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.215 [2024-07-24 22:58:13.884408] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.215 [2024-07-24 22:58:13.884423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.216 [2024-07-24 22:58:13.892972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.216 [2024-07-24 22:58:13.892987] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.216 [2024-07-24 22:58:13.901623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.216 [2024-07-24 22:58:13.901638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.216 [2024-07-24 22:58:13.909985] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.216 [2024-07-24 22:58:13.910001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.216 [2024-07-24 22:58:13.918736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.216 [2024-07-24 22:58:13.918757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.216 [2024-07-24 22:58:13.927718] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.216 [2024-07-24 22:58:13.927733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.216 [2024-07-24 22:58:13.941141] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.216 [2024-07-24 22:58:13.941157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.216 [2024-07-24 22:58:13.954336] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.216 [2024-07-24 22:58:13.954352] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.216 [2024-07-24 22:58:13.967124] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.216 [2024-07-24 22:58:13.967140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.216 [2024-07-24 22:58:13.980487] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.216 [2024-07-24 22:58:13.980504] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.216 [2024-07-24 22:58:13.993791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.216 [2024-07-24 22:58:13.993807] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.477 [2024-07-24 22:58:14.007009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:56.477 [2024-07-24 22:58:14.007025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.477 [2024-07-24 22:58:14.019980] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.477 [2024-07-24 22:58:14.019996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.477 [2024-07-24 22:58:14.033491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.477 [2024-07-24 22:58:14.033511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.477 [2024-07-24 22:58:14.046531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.477 [2024-07-24 22:58:14.046548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.477 [2024-07-24 22:58:14.058786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.477 [2024-07-24 22:58:14.058801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.477 [2024-07-24 22:58:14.071991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.477 [2024-07-24 22:58:14.072007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.477 [2024-07-24 22:58:14.085440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.477 [2024-07-24 22:58:14.085456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.477 [2024-07-24 22:58:14.098014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.477 [2024-07-24 22:58:14.098029] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.477 [2024-07-24 22:58:14.111156] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.477 
[2024-07-24 22:58:14.111171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.477 [2024-07-24 22:58:14.123701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.477 [2024-07-24 22:58:14.123716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.477 [2024-07-24 22:58:14.136798] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.477 [2024-07-24 22:58:14.136813] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.477 [2024-07-24 22:58:14.150125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.477 [2024-07-24 22:58:14.150140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.477 [2024-07-24 22:58:14.163438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.478 [2024-07-24 22:58:14.163454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.478 [2024-07-24 22:58:14.175900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.478 [2024-07-24 22:58:14.175915] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.478 [2024-07-24 22:58:14.188880] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.478 [2024-07-24 22:58:14.188895] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.478 [2024-07-24 22:58:14.201706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.478 [2024-07-24 22:58:14.201721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.478 00:09:56.478 Latency(us) 00:09:56.478 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:56.478 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, 
depth: 128, IO size: 8192) 00:09:56.478 Nvme1n1 : 5.01 19434.51 151.83 0.00 0.00 6579.83 2457.60 19660.80 00:09:56.478 =================================================================================================================== 00:09:56.478 Total : 19434.51 151.83 0.00 0.00 6579.83 2457.60 19660.80 00:09:56.478 [2024-07-24 22:58:14.211299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.478 [2024-07-24 22:58:14.211313] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.478 [2024-07-24 22:58:14.223331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.478 [2024-07-24 22:58:14.223342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.478 [2024-07-24 22:58:14.235364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.478 [2024-07-24 22:58:14.235375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.478 [2024-07-24 22:58:14.247392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.478 [2024-07-24 22:58:14.247403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.478 [2024-07-24 22:58:14.259421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.478 [2024-07-24 22:58:14.259431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.738 [2024-07-24 22:58:14.271449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.738 [2024-07-24 22:58:14.271459] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.739 [2024-07-24 22:58:14.283480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.739 [2024-07-24 22:58:14.283487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:09:56.739 [2024-07-24 22:58:14.295514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.739 [2024-07-24 22:58:14.295524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.739 [2024-07-24 22:58:14.307543] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.739 [2024-07-24 22:58:14.307552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.739 [2024-07-24 22:58:14.319576] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.739 [2024-07-24 22:58:14.319588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.739 [2024-07-24 22:58:14.331605] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.739 [2024-07-24 22:58:14.331614] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (709886) - No such process 00:09:56.739 22:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 709886 00:09:56.739 22:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.739 22:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.739 22:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:56.739 22:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.739 22:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:56.739 22:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.739 22:58:14 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:56.739 delay0 00:09:56.739 22:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.739 22:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:56.739 22:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.739 22:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:56.739 22:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.739 22:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:56.739 EAL: No free 2048 kB hugepages reported on node 1 00:09:56.739 [2024-07-24 22:58:14.472092] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:03.324 Initializing NVMe Controllers 00:10:03.324 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:03.324 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:03.324 Initialization complete. Launching workers. 
00:10:03.324 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 559 00:10:03.324 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 843, failed to submit 36 00:10:03.324 success 688, unsuccess 155, failed 0 00:10:03.324 22:58:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:03.324 22:58:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:03.324 22:58:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:03.324 22:58:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:10:03.324 22:58:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:03.324 22:58:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:10:03.324 22:58:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:03.324 22:58:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:03.324 rmmod nvme_tcp 00:10:03.324 rmmod nvme_fabrics 00:10:03.324 rmmod nvme_keyring 00:10:03.324 22:58:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:03.324 22:58:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:10:03.324 22:58:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:10:03.324 22:58:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 707600 ']' 00:10:03.324 22:58:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 707600 00:10:03.324 22:58:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 707600 ']' 00:10:03.324 22:58:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 707600 00:10:03.324 22:58:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@955 -- # uname 00:10:03.324 22:58:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:03.324 22:58:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 707600 00:10:03.324 22:58:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:03.324 22:58:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:03.324 22:58:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 707600' 00:10:03.324 killing process with pid 707600 00:10:03.324 22:58:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 707600 00:10:03.324 22:58:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 707600 00:10:03.324 22:58:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:03.324 22:58:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:03.324 22:58:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:03.324 22:58:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:03.324 22:58:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:03.324 22:58:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.325 22:58:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.325 22:58:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.281 22:58:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:05.281 00:10:05.281 real 0m33.959s 
00:10:05.281 user 0m44.490s 00:10:05.281 sys 0m10.361s 00:10:05.281 22:58:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:05.281 22:58:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:05.281 ************************************ 00:10:05.281 END TEST nvmf_zcopy 00:10:05.281 ************************************ 00:10:05.281 22:58:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:05.281 22:58:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:05.281 22:58:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:05.281 22:58:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:05.281 ************************************ 00:10:05.281 START TEST nvmf_nmic 00:10:05.281 ************************************ 00:10:05.281 22:58:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:05.544 * Looking for test storage... 
00:10:05.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:05.544 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:05.544 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:05.544 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:05.544 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.544 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.544 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.544 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.544 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.544 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.544 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.544 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.544 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.544 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:05.544 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:05.544 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.544 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.544 
22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:05.544 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.544 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:05.544 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.544 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.544 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.544 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.544 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.544 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.544 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:05.545 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.545 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:10:05.545 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:05.545 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:05.545 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.545 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:05.545 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.545 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:05.545 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:05.545 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:05.545 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:05.545 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:05.545 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:05.545 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:05.545 22:58:23 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:05.545 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:05.545 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:05.545 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:05.545 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.545 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.545 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.545 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:05.545 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:05.545 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:10:05.545 22:58:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@295 -- # net_devs=() 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:13.689 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.689 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:13.690 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:13.690 Found net devices under 0000:31:00.0: cvl_0_0 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:13.690 Found net devices under 0000:31:00.1: cvl_0_1 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:13.690 22:58:30 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:13.690 22:58:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:13.690 22:58:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:13.690 22:58:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:13.690 22:58:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:13.690 22:58:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:13.690 22:58:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:13.690 22:58:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:13.690 22:58:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:13.690 22:58:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:13.690 22:58:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:13.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:10:13.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:10:13.690 00:10:13.690 --- 10.0.0.2 ping statistics --- 00:10:13.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.690 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:10:13.690 22:58:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:13.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:13.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.375 ms 00:10:13.690 00:10:13.690 --- 10.0.0.1 ping statistics --- 00:10:13.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.690 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:10:13.690 22:58:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:13.690 22:58:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:10:13.690 22:58:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:13.690 22:58:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:13.690 22:58:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:13.690 22:58:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:13.690 22:58:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:13.690 22:58:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:13.690 22:58:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:13.690 22:58:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:13.690 22:58:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:13.690 22:58:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:10:13.690 22:58:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:13.690 22:58:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=716993 00:10:13.690 22:58:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 716993 00:10:13.690 22:58:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:13.690 22:58:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 716993 ']' 00:10:13.690 22:58:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.690 22:58:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:13.690 22:58:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.690 22:58:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:13.690 22:58:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:13.690 [2024-07-24 22:58:31.423394] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:10:13.690 [2024-07-24 22:58:31.423460] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.690 EAL: No free 2048 kB hugepages reported on node 1 00:10:13.951 [2024-07-24 22:58:31.502427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:13.951 [2024-07-24 22:58:31.578171] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:13.951 [2024-07-24 22:58:31.578212] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:13.951 [2024-07-24 22:58:31.578219] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:13.951 [2024-07-24 22:58:31.578226] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:13.951 [2024-07-24 22:58:31.578232] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:13.951 [2024-07-24 22:58:31.578367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.951 [2024-07-24 22:58:31.578485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:13.951 [2024-07-24 22:58:31.578640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.951 [2024-07-24 22:58:31.578642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:14.522 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:14.522 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:14.522 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:14.522 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:14.522 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.522 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:14.522 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:14.522 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.522 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.522 [2024-07-24 22:58:32.252742] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:14.522 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.522 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:14.522 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.522 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:10:14.522 Malloc0 00:10:14.522 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.522 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:14.522 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.522 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.522 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.522 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:14.522 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.522 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.522 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.522 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:14.522 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.522 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.783 [2024-07-24 22:58:32.309631] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:14.783 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.783 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:14.783 test case1: single bdev can't be used in multiple subsystems 
00:10:14.783 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:14.783 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.783 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.783 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.783 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:14.783 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.783 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.783 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.783 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:14.783 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:14.784 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.784 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.784 [2024-07-24 22:58:32.345555] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:14.784 [2024-07-24 22:58:32.345576] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:14.784 [2024-07-24 22:58:32.345584] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.784 request: 00:10:14.784 { 00:10:14.784 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:14.784 "namespace": { 00:10:14.784 
"bdev_name": "Malloc0", 00:10:14.784 "no_auto_visible": false 00:10:14.784 }, 00:10:14.784 "method": "nvmf_subsystem_add_ns", 00:10:14.784 "req_id": 1 00:10:14.784 } 00:10:14.784 Got JSON-RPC error response 00:10:14.784 response: 00:10:14.784 { 00:10:14.784 "code": -32602, 00:10:14.784 "message": "Invalid parameters" 00:10:14.784 } 00:10:14.784 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:14.784 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:14.784 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:14.784 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:14.784 Adding namespace failed - expected result. 00:10:14.784 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:14.784 test case2: host connect to nvmf target in multiple paths 00:10:14.784 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:14.784 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.784 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.784 [2024-07-24 22:58:32.357686] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:14.784 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.784 22:58:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:16.169 22:58:33 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:18.082 22:58:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:18.082 22:58:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:18.082 22:58:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:18.082 22:58:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:18.082 22:58:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:19.994 22:58:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:19.994 22:58:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:19.994 22:58:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:19.994 22:58:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:19.994 22:58:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:19.994 22:58:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:19.994 22:58:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:19.994 [global] 00:10:19.994 thread=1 00:10:19.994 invalidate=1 00:10:19.994 rw=write 00:10:19.994 time_based=1 00:10:19.994 runtime=1 00:10:19.994 ioengine=libaio 00:10:19.994 direct=1 00:10:19.994 bs=4096 00:10:19.994 iodepth=1 00:10:19.994 
norandommap=0 00:10:19.994 numjobs=1 00:10:19.994 00:10:19.994 verify_dump=1 00:10:19.994 verify_backlog=512 00:10:19.994 verify_state_save=0 00:10:19.994 do_verify=1 00:10:19.994 verify=crc32c-intel 00:10:19.994 [job0] 00:10:19.994 filename=/dev/nvme0n1 00:10:19.994 Could not set queue depth (nvme0n1) 00:10:19.994 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:19.994 fio-3.35 00:10:19.994 Starting 1 thread 00:10:21.378 00:10:21.378 job0: (groupid=0, jobs=1): err= 0: pid=718383: Wed Jul 24 22:58:38 2024 00:10:21.378 read: IOPS=13, BW=55.8KiB/s (57.2kB/s)(56.0KiB/1003msec) 00:10:21.378 slat (nsec): min=24282, max=24855, avg=24631.29, stdev=138.07 00:10:21.378 clat (usec): min=41563, max=43012, avg=42076.16, stdev=394.30 00:10:21.378 lat (usec): min=41587, max=43037, avg=42100.80, stdev=394.32 00:10:21.378 clat percentiles (usec): 00:10:21.378 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:10:21.378 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:21.378 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[43254], 00:10:21.378 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:21.378 | 99.99th=[43254] 00:10:21.378 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:10:21.378 slat (usec): min=9, max=27245, avg=82.49, stdev=1202.81 00:10:21.378 clat (usec): min=390, max=994, avg=717.61, stdev=92.50 00:10:21.378 lat (usec): min=401, max=28049, avg=800.10, stdev=1210.48 00:10:21.378 clat percentiles (usec): 00:10:21.378 | 1.00th=[ 474], 5.00th=[ 553], 10.00th=[ 586], 20.00th=[ 644], 00:10:21.378 | 30.00th=[ 676], 40.00th=[ 709], 50.00th=[ 725], 60.00th=[ 750], 00:10:21.378 | 70.00th=[ 775], 80.00th=[ 799], 90.00th=[ 824], 95.00th=[ 848], 00:10:21.378 | 99.00th=[ 889], 99.50th=[ 898], 99.90th=[ 996], 99.95th=[ 996], 00:10:21.378 | 99.99th=[ 996] 00:10:21.378 bw ( KiB/s): min= 4096, max= 4096, 
per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:21.378 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:21.378 lat (usec) : 500=1.90%, 750=57.03%, 1000=38.40% 00:10:21.378 lat (msec) : 50=2.66% 00:10:21.378 cpu : usr=0.70%, sys=1.50%, ctx=530, majf=0, minf=1 00:10:21.378 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:21.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.378 issued rwts: total=14,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.378 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:21.378 00:10:21.379 Run status group 0 (all jobs): 00:10:21.379 READ: bw=55.8KiB/s (57.2kB/s), 55.8KiB/s-55.8KiB/s (57.2kB/s-57.2kB/s), io=56.0KiB (57.3kB), run=1003-1003msec 00:10:21.379 WRITE: bw=2042KiB/s (2091kB/s), 2042KiB/s-2042KiB/s (2091kB/s-2091kB/s), io=2048KiB (2097kB), run=1003-1003msec 00:10:21.379 00:10:21.379 Disk stats (read/write): 00:10:21.379 nvme0n1: ios=36/512, merge=0/0, ticks=1432/349, in_queue=1781, util=99.00% 00:10:21.379 22:58:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:21.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:21.379 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:21.379 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:21.379 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:21.379 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:21.379 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:21.379 22:58:39 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:21.379 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:21.379 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:21.379 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:21.379 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:21.379 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:21.379 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:21.379 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:21.379 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:21.379 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:21.379 rmmod nvme_tcp 00:10:21.379 rmmod nvme_fabrics 00:10:21.379 rmmod nvme_keyring 00:10:21.379 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:21.379 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:21.379 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:21.379 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 716993 ']' 00:10:21.379 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 716993 00:10:21.379 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 716993 ']' 00:10:21.379 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 716993 00:10:21.639 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:21.639 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:21.639 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 716993 00:10:21.639 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:21.639 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:21.639 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 716993' 00:10:21.639 killing process with pid 716993 00:10:21.639 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 716993 00:10:21.639 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 716993 00:10:21.639 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:21.639 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:21.639 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:21.639 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:21.639 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:21.639 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.639 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:21.639 22:58:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:24.185 00:10:24.185 real 0m18.448s 00:10:24.185 user 0m48.493s 00:10:24.185 sys 0m6.783s 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:24.185 ************************************ 00:10:24.185 END TEST nvmf_nmic 00:10:24.185 ************************************ 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:24.185 ************************************ 00:10:24.185 START TEST nvmf_fio_target 00:10:24.185 ************************************ 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:24.185 * Looking for test storage... 
00:10:24.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:24.185 22:58:41 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:24.185 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:24.186 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:24.186 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:24.186 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:24.186 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:24.186 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:24.186 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:24.186 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:24.186 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:24.186 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:24.186 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:24.186 22:58:41 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:24.186 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:24.186 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:24.186 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:24.186 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:24.186 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:24.186 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.186 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:24.186 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.186 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:24.186 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:24.186 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:10:24.186 22:58:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:32.329 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:32.329 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:32.329 Found net devices under 0000:31:00.0: cvl_0_0 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:32.329 Found net devices under 0000:31:00.1: cvl_0_1 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:32.329 22:58:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 
-- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:32.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:32.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:10:32.329 00:10:32.329 --- 10.0.0.2 ping statistics --- 00:10:32.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.329 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:32.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:32.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:10:32.329 00:10:32.329 --- 10.0.0.1 ping statistics --- 00:10:32.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.329 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:10:32.329 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:32.330 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:32.330 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:32.330 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:32.330 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:32.330 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:32.330 22:58:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:32.330 22:58:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:32.330 22:58:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:32.330 22:58:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:32.330 22:58:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.330 22:58:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=723444 00:10:32.330 22:58:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 723444 00:10:32.330 22:58:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:32.330 22:58:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 723444 ']' 00:10:32.330 22:58:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.330 22:58:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:32.330 22:58:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.330 22:58:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:32.330 22:58:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.330 [2024-07-24 22:58:50.066363] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:10:32.330 [2024-07-24 22:58:50.066434] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.330 EAL: No free 2048 kB hugepages reported on node 1 00:10:32.590 [2024-07-24 22:58:50.147618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:32.590 [2024-07-24 22:58:50.223721] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:32.590 [2024-07-24 22:58:50.223766] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:32.590 [2024-07-24 22:58:50.223775] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:32.590 [2024-07-24 22:58:50.223782] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:32.590 [2024-07-24 22:58:50.223787] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:32.590 [2024-07-24 22:58:50.223854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.590 [2024-07-24 22:58:50.223965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:32.590 [2024-07-24 22:58:50.224108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.590 [2024-07-24 22:58:50.224109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:33.161 22:58:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:33.161 22:58:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:33.161 22:58:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:33.161 22:58:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:33.161 22:58:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.161 22:58:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:33.161 22:58:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:33.422 [2024-07-24 22:58:51.031071] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:33.422 22:58:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:33.683 22:58:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:33.683 22:58:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:33.683 22:58:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:33.683 22:58:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:33.943 22:58:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:33.943 22:58:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:34.204 22:58:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:34.204 22:58:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:34.204 22:58:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:34.464 22:58:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:34.464 22:58:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:34.725 22:58:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:34.725 22:58:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:34.725 22:58:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:34.725 22:58:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:34.985 22:58:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:35.246 22:58:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:35.246 22:58:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:35.246 22:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:35.246 22:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:35.506 22:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:35.768 [2024-07-24 22:58:53.321832] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:35.768 22:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:35.768 22:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:36.028 22:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:37.413 22:58:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:37.413 22:58:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:37.413 22:58:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:37.413 22:58:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:37.413 22:58:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:37.413 22:58:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:39.390 22:58:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:39.390 22:58:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:39.390 22:58:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:39.390 22:58:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:39.390 22:58:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:39.390 22:58:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:39.390 22:58:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:39.651 [global] 00:10:39.651 thread=1 00:10:39.651 invalidate=1 00:10:39.651 rw=write 00:10:39.651 time_based=1 00:10:39.651 runtime=1 00:10:39.651 ioengine=libaio 00:10:39.651 direct=1 00:10:39.651 bs=4096 00:10:39.651 iodepth=1 00:10:39.651 norandommap=0 00:10:39.651 numjobs=1 00:10:39.651 00:10:39.651 verify_dump=1 00:10:39.651 verify_backlog=512 00:10:39.651 verify_state_save=0 00:10:39.651 do_verify=1 00:10:39.651 verify=crc32c-intel 00:10:39.651 [job0] 00:10:39.651 filename=/dev/nvme0n1 00:10:39.651 [job1] 00:10:39.651 filename=/dev/nvme0n2 00:10:39.651 [job2] 00:10:39.651 filename=/dev/nvme0n3 00:10:39.651 [job3] 00:10:39.651 filename=/dev/nvme0n4 00:10:39.651 Could not set queue depth (nvme0n1) 00:10:39.651 Could not set queue depth (nvme0n2) 00:10:39.651 Could not set queue depth (nvme0n3) 00:10:39.651 Could not set queue depth (nvme0n4) 00:10:39.912 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:39.912 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:39.912 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:39.912 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:39.912 fio-3.35 00:10:39.912 Starting 4 threads 00:10:41.299 00:10:41.299 job0: (groupid=0, jobs=1): err= 0: pid=725162: Wed Jul 24 22:58:58 2024 00:10:41.299 read: IOPS=133, BW=535KiB/s (548kB/s)(540KiB/1009msec) 00:10:41.299 slat (nsec): min=24152, max=42835, avg=25321.81, stdev=2900.09 00:10:41.299 clat (usec): min=606, max=42224, avg=4913.47, stdev=12147.20 00:10:41.299 lat (usec): min=630, max=42251, avg=4938.80, stdev=12147.52 00:10:41.299 clat percentiles (usec): 00:10:41.299 | 1.00th=[ 619], 5.00th=[ 701], 10.00th=[ 816], 20.00th=[ 938], 
00:10:41.299 | 30.00th=[ 955], 40.00th=[ 979], 50.00th=[ 996], 60.00th=[ 1004], 00:10:41.299 | 70.00th=[ 1037], 80.00th=[ 1057], 90.00th=[ 1139], 95.00th=[42206], 00:10:41.299 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:41.299 | 99.99th=[42206] 00:10:41.299 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:10:41.299 slat (usec): min=9, max=11043, avg=49.72, stdev=486.89 00:10:41.299 clat (usec): min=242, max=820, avg=608.63, stdev=110.40 00:10:41.299 lat (usec): min=253, max=11795, avg=658.35, stdev=506.45 00:10:41.299 clat percentiles (usec): 00:10:41.299 | 1.00th=[ 343], 5.00th=[ 379], 10.00th=[ 465], 20.00th=[ 519], 00:10:41.299 | 30.00th=[ 570], 40.00th=[ 594], 50.00th=[ 611], 60.00th=[ 644], 00:10:41.299 | 70.00th=[ 685], 80.00th=[ 709], 90.00th=[ 742], 95.00th=[ 758], 00:10:41.299 | 99.00th=[ 807], 99.50th=[ 816], 99.90th=[ 824], 99.95th=[ 824], 00:10:41.299 | 99.99th=[ 824] 00:10:41.299 bw ( KiB/s): min= 4096, max= 4096, per=51.55%, avg=4096.00, stdev= 0.00, samples=1 00:10:41.299 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:41.299 lat (usec) : 250=0.15%, 500=12.98%, 750=61.82%, 1000=14.84% 00:10:41.299 lat (msec) : 2=8.19%, 50=2.01% 00:10:41.299 cpu : usr=0.89%, sys=1.79%, ctx=649, majf=0, minf=1 00:10:41.299 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.299 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.299 issued rwts: total=135,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.299 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.299 job1: (groupid=0, jobs=1): err= 0: pid=725163: Wed Jul 24 22:58:58 2024 00:10:41.299 read: IOPS=58, BW=233KiB/s (239kB/s)(236KiB/1013msec) 00:10:41.299 slat (nsec): min=23885, max=42829, avg=25685.81, stdev=4146.49 00:10:41.299 clat (usec): min=1095, max=43086, 
avg=10267.00, stdev=17131.88 00:10:41.299 lat (usec): min=1120, max=43110, avg=10292.68, stdev=17131.18 00:10:41.299 clat percentiles (usec): 00:10:41.299 | 1.00th=[ 1090], 5.00th=[ 1172], 10.00th=[ 1188], 20.00th=[ 1221], 00:10:41.299 | 30.00th=[ 1221], 40.00th=[ 1254], 50.00th=[ 1254], 60.00th=[ 1270], 00:10:41.299 | 70.00th=[ 1303], 80.00th=[41681], 90.00th=[42206], 95.00th=[43254], 00:10:41.299 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:41.299 | 99.99th=[43254] 00:10:41.299 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:10:41.299 slat (nsec): min=9407, max=50091, avg=28171.45, stdev=9072.00 00:10:41.299 clat (usec): min=365, max=963, avg=755.90, stdev=98.98 00:10:41.299 lat (usec): min=377, max=992, avg=784.07, stdev=103.70 00:10:41.299 clat percentiles (usec): 00:10:41.299 | 1.00th=[ 482], 5.00th=[ 578], 10.00th=[ 611], 20.00th=[ 685], 00:10:41.299 | 30.00th=[ 717], 40.00th=[ 742], 50.00th=[ 766], 60.00th=[ 791], 00:10:41.299 | 70.00th=[ 816], 80.00th=[ 840], 90.00th=[ 865], 95.00th=[ 889], 00:10:41.299 | 99.00th=[ 938], 99.50th=[ 955], 99.90th=[ 963], 99.95th=[ 963], 00:10:41.299 | 99.99th=[ 963] 00:10:41.299 bw ( KiB/s): min= 4096, max= 4096, per=51.55%, avg=4096.00, stdev= 0.00, samples=1 00:10:41.299 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:41.299 lat (usec) : 500=1.58%, 750=36.60%, 1000=51.49% 00:10:41.299 lat (msec) : 2=8.06%, 50=2.28% 00:10:41.299 cpu : usr=0.79%, sys=1.48%, ctx=571, majf=0, minf=1 00:10:41.299 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.299 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.299 issued rwts: total=59,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.299 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.299 job2: (groupid=0, jobs=1): err= 0: pid=725164: Wed Jul 
24 22:58:58 2024 00:10:41.299 read: IOPS=492, BW=1970KiB/s (2017kB/s)(1972KiB/1001msec) 00:10:41.299 slat (nsec): min=7864, max=56863, avg=25097.01, stdev=2132.48 00:10:41.299 clat (usec): min=796, max=1346, avg=1151.96, stdev=74.43 00:10:41.299 lat (usec): min=821, max=1370, avg=1177.06, stdev=74.28 00:10:41.299 clat percentiles (usec): 00:10:41.299 | 1.00th=[ 914], 5.00th=[ 1012], 10.00th=[ 1074], 20.00th=[ 1106], 00:10:41.299 | 30.00th=[ 1123], 40.00th=[ 1139], 50.00th=[ 1156], 60.00th=[ 1172], 00:10:41.299 | 70.00th=[ 1188], 80.00th=[ 1205], 90.00th=[ 1237], 95.00th=[ 1254], 00:10:41.299 | 99.00th=[ 1319], 99.50th=[ 1319], 99.90th=[ 1352], 99.95th=[ 1352], 00:10:41.299 | 99.99th=[ 1352] 00:10:41.299 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:10:41.299 slat (nsec): min=9731, max=65904, avg=30214.35, stdev=8667.84 00:10:41.299 clat (usec): min=483, max=1007, avg=773.96, stdev=99.95 00:10:41.299 lat (usec): min=501, max=1051, avg=804.18, stdev=103.39 00:10:41.299 clat percentiles (usec): 00:10:41.299 | 1.00th=[ 537], 5.00th=[ 594], 10.00th=[ 635], 20.00th=[ 685], 00:10:41.299 | 30.00th=[ 725], 40.00th=[ 750], 50.00th=[ 783], 60.00th=[ 807], 00:10:41.299 | 70.00th=[ 832], 80.00th=[ 857], 90.00th=[ 898], 95.00th=[ 922], 00:10:41.299 | 99.00th=[ 971], 99.50th=[ 1004], 99.90th=[ 1004], 99.95th=[ 1004], 00:10:41.299 | 99.99th=[ 1004] 00:10:41.299 bw ( KiB/s): min= 4096, max= 4096, per=51.55%, avg=4096.00, stdev= 0.00, samples=1 00:10:41.299 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:41.299 lat (usec) : 500=0.20%, 750=19.40%, 1000=33.13% 00:10:41.299 lat (msec) : 2=47.26% 00:10:41.299 cpu : usr=1.90%, sys=2.50%, ctx=1005, majf=0, minf=1 00:10:41.299 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.299 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.299 
issued rwts: total=493,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.299 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.299 job3: (groupid=0, jobs=1): err= 0: pid=725165: Wed Jul 24 22:58:58 2024 00:10:41.299 read: IOPS=18, BW=73.7KiB/s (75.5kB/s)(76.0KiB/1031msec) 00:10:41.299 slat (nsec): min=25124, max=26770, avg=25605.16, stdev=414.92 00:10:41.299 clat (usec): min=785, max=41758, avg=38884.90, stdev=9228.18 00:10:41.299 lat (usec): min=811, max=41784, avg=38910.50, stdev=9228.16 00:10:41.299 clat percentiles (usec): 00:10:41.299 | 1.00th=[ 783], 5.00th=[ 783], 10.00th=[40633], 20.00th=[41157], 00:10:41.299 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:41.299 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:10:41.299 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:10:41.299 | 99.99th=[41681] 00:10:41.299 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:10:41.299 slat (nsec): min=8836, max=51079, avg=30335.20, stdev=8288.74 00:10:41.299 clat (usec): min=181, max=765, avg=533.05, stdev=116.69 00:10:41.299 lat (usec): min=191, max=797, avg=563.39, stdev=120.17 00:10:41.299 clat percentiles (usec): 00:10:41.299 | 1.00th=[ 215], 5.00th=[ 314], 10.00th=[ 367], 20.00th=[ 437], 00:10:41.299 | 30.00th=[ 490], 40.00th=[ 515], 50.00th=[ 545], 60.00th=[ 570], 00:10:41.299 | 70.00th=[ 603], 80.00th=[ 644], 90.00th=[ 676], 95.00th=[ 709], 00:10:41.299 | 99.00th=[ 750], 99.50th=[ 758], 99.90th=[ 766], 99.95th=[ 766], 00:10:41.299 | 99.99th=[ 766] 00:10:41.299 bw ( KiB/s): min= 4096, max= 4096, per=51.55%, avg=4096.00, stdev= 0.00, samples=1 00:10:41.299 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:41.299 lat (usec) : 250=1.69%, 500=31.07%, 750=62.90%, 1000=0.94% 00:10:41.299 lat (msec) : 50=3.39% 00:10:41.299 cpu : usr=1.26%, sys=1.75%, ctx=531, majf=0, minf=1 00:10:41.299 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.299 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.299 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.299 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.299 00:10:41.299 Run status group 0 (all jobs): 00:10:41.299 READ: bw=2739KiB/s (2805kB/s), 73.7KiB/s-1970KiB/s (75.5kB/s-2017kB/s), io=2824KiB (2892kB), run=1001-1031msec 00:10:41.299 WRITE: bw=7946KiB/s (8136kB/s), 1986KiB/s-2046KiB/s (2034kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1031msec 00:10:41.299 00:10:41.299 Disk stats (read/write): 00:10:41.299 nvme0n1: ios=151/512, merge=0/0, ticks=1255/304, in_queue=1559, util=86.97% 00:10:41.299 nvme0n2: ios=103/512, merge=0/0, ticks=448/368, in_queue=816, util=86.44% 00:10:41.299 nvme0n3: ios=370/512, merge=0/0, ticks=451/361, in_queue=812, util=91.63% 00:10:41.299 nvme0n4: ios=70/512, merge=0/0, ticks=632/193, in_queue=825, util=95.95% 00:10:41.299 22:58:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:41.299 [global] 00:10:41.299 thread=1 00:10:41.299 invalidate=1 00:10:41.299 rw=randwrite 00:10:41.299 time_based=1 00:10:41.299 runtime=1 00:10:41.299 ioengine=libaio 00:10:41.300 direct=1 00:10:41.300 bs=4096 00:10:41.300 iodepth=1 00:10:41.300 norandommap=0 00:10:41.300 numjobs=1 00:10:41.300 00:10:41.300 verify_dump=1 00:10:41.300 verify_backlog=512 00:10:41.300 verify_state_save=0 00:10:41.300 do_verify=1 00:10:41.300 verify=crc32c-intel 00:10:41.300 [job0] 00:10:41.300 filename=/dev/nvme0n1 00:10:41.300 [job1] 00:10:41.300 filename=/dev/nvme0n2 00:10:41.300 [job2] 00:10:41.300 filename=/dev/nvme0n3 00:10:41.300 [job3] 00:10:41.300 filename=/dev/nvme0n4 00:10:41.300 Could not set queue depth (nvme0n1) 00:10:41.300 Could not set 
queue depth (nvme0n2) 00:10:41.300 Could not set queue depth (nvme0n3) 00:10:41.300 Could not set queue depth (nvme0n4) 00:10:41.560 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.560 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.560 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.560 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.560 fio-3.35 00:10:41.560 Starting 4 threads 00:10:42.944 00:10:42.944 job0: (groupid=0, jobs=1): err= 0: pid=725681: Wed Jul 24 22:59:00 2024 00:10:42.944 read: IOPS=313, BW=1255KiB/s (1285kB/s)(1256KiB/1001msec) 00:10:42.944 slat (nsec): min=9427, max=90145, avg=26038.06, stdev=4959.65 00:10:42.944 clat (usec): min=977, max=42214, avg=1864.09, stdev=4834.47 00:10:42.944 lat (usec): min=1003, max=42240, avg=1890.13, stdev=4834.43 00:10:42.944 clat percentiles (usec): 00:10:42.944 | 1.00th=[ 1020], 5.00th=[ 1123], 10.00th=[ 1172], 20.00th=[ 1205], 00:10:42.944 | 30.00th=[ 1221], 40.00th=[ 1237], 50.00th=[ 1254], 60.00th=[ 1270], 00:10:42.944 | 70.00th=[ 1287], 80.00th=[ 1303], 90.00th=[ 1336], 95.00th=[ 1369], 00:10:42.944 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:42.944 | 99.99th=[42206] 00:10:42.944 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:10:42.944 slat (nsec): min=9740, max=63698, avg=29304.48, stdev=9259.59 00:10:42.944 clat (usec): min=375, max=996, avg=752.18, stdev=95.39 00:10:42.944 lat (usec): min=386, max=1010, avg=781.48, stdev=99.33 00:10:42.944 clat percentiles (usec): 00:10:42.944 | 1.00th=[ 486], 5.00th=[ 578], 10.00th=[ 619], 20.00th=[ 685], 00:10:42.944 | 30.00th=[ 709], 40.00th=[ 734], 50.00th=[ 758], 60.00th=[ 791], 00:10:42.944 | 70.00th=[ 816], 80.00th=[ 832], 90.00th=[ 
865], 95.00th=[ 881], 00:10:42.944 | 99.00th=[ 938], 99.50th=[ 955], 99.90th=[ 996], 99.95th=[ 996], 00:10:42.944 | 99.99th=[ 996] 00:10:42.944 bw ( KiB/s): min= 4096, max= 4096, per=51.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:42.944 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:42.944 lat (usec) : 500=1.09%, 750=28.21%, 1000=32.93% 00:10:42.944 lat (msec) : 2=37.05%, 10=0.12%, 50=0.61% 00:10:42.944 cpu : usr=1.40%, sys=2.20%, ctx=828, majf=0, minf=1 00:10:42.944 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:42.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.944 issued rwts: total=314,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.944 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:42.944 job1: (groupid=0, jobs=1): err= 0: pid=725682: Wed Jul 24 22:59:00 2024 00:10:42.944 read: IOPS=23, BW=95.3KiB/s (97.6kB/s)(96.0KiB/1007msec) 00:10:42.944 slat (nsec): min=9627, max=40716, avg=26025.21, stdev=4701.65 00:10:42.944 clat (usec): min=985, max=43021, avg=35133.86, stdev=15496.71 00:10:42.944 lat (usec): min=1011, max=43051, avg=35159.89, stdev=15495.03 00:10:42.944 clat percentiles (usec): 00:10:42.944 | 1.00th=[ 988], 5.00th=[ 1004], 10.00th=[ 1254], 20.00th=[41157], 00:10:42.944 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:10:42.944 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:42.944 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:42.944 | 99.99th=[43254] 00:10:42.944 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:10:42.944 slat (nsec): min=8323, max=48944, avg=23689.27, stdev=11088.15 00:10:42.944 clat (usec): min=137, max=622, avg=288.90, stdev=110.81 00:10:42.944 lat (usec): min=147, max=652, avg=312.59, stdev=116.76 00:10:42.944 clat percentiles 
(usec): 00:10:42.944 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 159], 20.00th=[ 174], 00:10:42.944 | 30.00th=[ 239], 40.00th=[ 262], 50.00th=[ 273], 60.00th=[ 285], 00:10:42.944 | 70.00th=[ 314], 80.00th=[ 388], 90.00th=[ 445], 95.00th=[ 502], 00:10:42.944 | 99.00th=[ 594], 99.50th=[ 603], 99.90th=[ 619], 99.95th=[ 619], 00:10:42.944 | 99.99th=[ 619] 00:10:42.944 bw ( KiB/s): min= 4096, max= 4096, per=51.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:42.944 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:42.944 lat (usec) : 250=32.09%, 500=58.40%, 750=5.04%, 1000=0.19% 00:10:42.944 lat (msec) : 2=0.56%, 50=3.73% 00:10:42.944 cpu : usr=1.29%, sys=1.29%, ctx=536, majf=0, minf=1 00:10:42.944 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:42.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.944 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.944 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:42.944 job2: (groupid=0, jobs=1): err= 0: pid=725683: Wed Jul 24 22:59:00 2024 00:10:42.944 read: IOPS=496, BW=1986KiB/s (2034kB/s)(1988KiB/1001msec) 00:10:42.944 slat (nsec): min=7759, max=61625, avg=25960.06, stdev=3683.50 00:10:42.944 clat (usec): min=797, max=1496, avg=1193.72, stdev=75.53 00:10:42.944 lat (usec): min=817, max=1522, avg=1219.68, stdev=75.94 00:10:42.944 clat percentiles (usec): 00:10:42.944 | 1.00th=[ 971], 5.00th=[ 1057], 10.00th=[ 1106], 20.00th=[ 1139], 00:10:42.944 | 30.00th=[ 1172], 40.00th=[ 1188], 50.00th=[ 1205], 60.00th=[ 1221], 00:10:42.944 | 70.00th=[ 1237], 80.00th=[ 1254], 90.00th=[ 1270], 95.00th=[ 1303], 00:10:42.944 | 99.00th=[ 1352], 99.50th=[ 1352], 99.90th=[ 1500], 99.95th=[ 1500], 00:10:42.944 | 99.99th=[ 1500] 00:10:42.944 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:10:42.944 slat (nsec): 
min=8716, max=49793, avg=28466.53, stdev=9105.42 00:10:42.944 clat (usec): min=325, max=963, avg=725.94, stdev=98.65 00:10:42.944 lat (usec): min=335, max=995, avg=754.41, stdev=102.34 00:10:42.944 clat percentiles (usec): 00:10:42.944 | 1.00th=[ 474], 5.00th=[ 562], 10.00th=[ 594], 20.00th=[ 660], 00:10:42.944 | 30.00th=[ 685], 40.00th=[ 701], 50.00th=[ 725], 60.00th=[ 750], 00:10:42.944 | 70.00th=[ 783], 80.00th=[ 816], 90.00th=[ 848], 95.00th=[ 881], 00:10:42.944 | 99.00th=[ 914], 99.50th=[ 947], 99.90th=[ 963], 99.95th=[ 963], 00:10:42.944 | 99.99th=[ 963] 00:10:42.944 bw ( KiB/s): min= 4096, max= 4096, per=51.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:42.944 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:42.944 lat (usec) : 500=1.09%, 750=29.04%, 1000=21.41% 00:10:42.944 lat (msec) : 2=48.46% 00:10:42.944 cpu : usr=1.50%, sys=4.40%, ctx=1009, majf=0, minf=1 00:10:42.944 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:42.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.945 issued rwts: total=497,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.945 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:42.945 job3: (groupid=0, jobs=1): err= 0: pid=725684: Wed Jul 24 22:59:00 2024 00:10:42.945 read: IOPS=15, BW=62.7KiB/s (64.2kB/s)(64.0KiB/1020msec) 00:10:42.945 slat (nsec): min=25737, max=29262, avg=26627.12, stdev=830.68 00:10:42.945 clat (usec): min=41050, max=43058, avg=42173.94, stdev=529.62 00:10:42.945 lat (usec): min=41077, max=43084, avg=42200.57, stdev=529.51 00:10:42.945 clat percentiles (usec): 00:10:42.945 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:10:42.945 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:42.945 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:10:42.945 | 
99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:42.945 | 99.99th=[43254] 00:10:42.945 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:10:42.945 slat (nsec): min=9077, max=54987, avg=29948.67, stdev=10601.88 00:10:42.945 clat (usec): min=323, max=1873, avg=635.16, stdev=140.68 00:10:42.945 lat (usec): min=360, max=1915, avg=665.10, stdev=143.11 00:10:42.945 clat percentiles (usec): 00:10:42.945 | 1.00th=[ 379], 5.00th=[ 437], 10.00th=[ 474], 20.00th=[ 519], 00:10:42.945 | 30.00th=[ 553], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 668], 00:10:42.945 | 70.00th=[ 717], 80.00th=[ 766], 90.00th=[ 816], 95.00th=[ 848], 00:10:42.945 | 99.00th=[ 898], 99.50th=[ 947], 99.90th=[ 1876], 99.95th=[ 1876], 00:10:42.945 | 99.99th=[ 1876] 00:10:42.945 bw ( KiB/s): min= 4096, max= 4096, per=51.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:42.945 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:42.945 lat (usec) : 500=15.72%, 750=58.52%, 1000=22.54% 00:10:42.945 lat (msec) : 2=0.19%, 50=3.03% 00:10:42.945 cpu : usr=0.98%, sys=1.96%, ctx=530, majf=0, minf=1 00:10:42.945 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:42.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.945 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.945 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:42.945 00:10:42.945 Run status group 0 (all jobs): 00:10:42.945 READ: bw=3337KiB/s (3417kB/s), 62.7KiB/s-1986KiB/s (64.2kB/s-2034kB/s), io=3404KiB (3486kB), run=1001-1020msec 00:10:42.945 WRITE: bw=8031KiB/s (8224kB/s), 2008KiB/s-2046KiB/s (2056kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1020msec 00:10:42.945 00:10:42.945 Disk stats (read/write): 00:10:42.945 nvme0n1: ios=229/512, merge=0/0, ticks=550/354, in_queue=904, util=87.68% 00:10:42.945 nvme0n2: 
ios=23/512, merge=0/0, ticks=801/103, in_queue=904, util=85.83% 00:10:42.945 nvme0n3: ios=421/512, merge=0/0, ticks=480/312, in_queue=792, util=93.15% 00:10:42.945 nvme0n4: ios=69/512, merge=0/0, ticks=1059/281, in_queue=1340, util=97.23% 00:10:42.945 22:59:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:42.945 [global] 00:10:42.945 thread=1 00:10:42.945 invalidate=1 00:10:42.945 rw=write 00:10:42.945 time_based=1 00:10:42.945 runtime=1 00:10:42.945 ioengine=libaio 00:10:42.945 direct=1 00:10:42.945 bs=4096 00:10:42.945 iodepth=128 00:10:42.945 norandommap=0 00:10:42.945 numjobs=1 00:10:42.945 00:10:42.945 verify_dump=1 00:10:42.945 verify_backlog=512 00:10:42.945 verify_state_save=0 00:10:42.945 do_verify=1 00:10:42.945 verify=crc32c-intel 00:10:42.945 [job0] 00:10:42.945 filename=/dev/nvme0n1 00:10:42.945 [job1] 00:10:42.945 filename=/dev/nvme0n2 00:10:42.945 [job2] 00:10:42.945 filename=/dev/nvme0n3 00:10:42.945 [job3] 00:10:42.945 filename=/dev/nvme0n4 00:10:42.945 Could not set queue depth (nvme0n1) 00:10:42.945 Could not set queue depth (nvme0n2) 00:10:42.945 Could not set queue depth (nvme0n3) 00:10:42.945 Could not set queue depth (nvme0n4) 00:10:43.206 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:43.206 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:43.206 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:43.206 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:43.206 fio-3.35 00:10:43.206 Starting 4 threads 00:10:44.590 00:10:44.590 job0: (groupid=0, jobs=1): err= 0: pid=726211: Wed Jul 24 22:59:02 2024 00:10:44.590 read: IOPS=9679, BW=37.8MiB/s 
(39.6MB/s)(38.0MiB/1005msec) 00:10:44.590 slat (nsec): min=853, max=3705.5k, avg=51002.71, stdev=312056.57 00:10:44.590 clat (usec): min=3893, max=11091, avg=6579.73, stdev=883.36 00:10:44.590 lat (usec): min=3898, max=12680, avg=6630.73, stdev=923.40 00:10:44.590 clat percentiles (usec): 00:10:44.590 | 1.00th=[ 4555], 5.00th=[ 5276], 10.00th=[ 5604], 20.00th=[ 5932], 00:10:44.590 | 30.00th=[ 6128], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6587], 00:10:44.590 | 70.00th=[ 6980], 80.00th=[ 7308], 90.00th=[ 7701], 95.00th=[ 7963], 00:10:44.590 | 99.00th=[ 9110], 99.50th=[ 9372], 99.90th=[10683], 99.95th=[10945], 00:10:44.590 | 99.99th=[11076] 00:10:44.590 write: IOPS=10.1k, BW=39.4MiB/s (41.3MB/s)(39.6MiB/1005msec); 0 zone resets 00:10:44.590 slat (nsec): min=1498, max=3953.0k, avg=46423.35, stdev=243083.80 00:10:44.590 clat (usec): min=2920, max=12104, avg=6247.31, stdev=1060.09 00:10:44.591 lat (usec): min=2924, max=12106, avg=6293.73, stdev=1072.86 00:10:44.591 clat percentiles (usec): 00:10:44.591 | 1.00th=[ 4047], 5.00th=[ 4686], 10.00th=[ 5080], 20.00th=[ 5473], 00:10:44.591 | 30.00th=[ 5669], 40.00th=[ 5866], 50.00th=[ 6063], 60.00th=[ 6325], 00:10:44.591 | 70.00th=[ 6652], 80.00th=[ 7046], 90.00th=[ 7701], 95.00th=[ 8029], 00:10:44.591 | 99.00th=[ 9372], 99.50th=[ 9896], 99.90th=[12125], 99.95th=[12125], 00:10:44.591 | 99.99th=[12125] 00:10:44.591 bw ( KiB/s): min=39072, max=40960, per=49.63%, avg=40016.00, stdev=1335.02, samples=2 00:10:44.591 iops : min= 9768, max=10240, avg=10004.00, stdev=333.75, samples=2 00:10:44.591 lat (msec) : 4=0.34%, 10=99.35%, 20=0.31% 00:10:44.591 cpu : usr=4.88%, sys=7.07%, ctx=1057, majf=0, minf=1 00:10:44.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:44.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:44.591 issued rwts: total=9728,10131,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:10:44.591 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:44.591 job1: (groupid=0, jobs=1): err= 0: pid=726212: Wed Jul 24 22:59:02 2024 00:10:44.591 read: IOPS=4711, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1005msec) 00:10:44.591 slat (nsec): min=902, max=16062k, avg=82241.28, stdev=713924.23 00:10:44.591 clat (usec): min=1368, max=77219, avg=11894.72, stdev=9860.10 00:10:44.591 lat (usec): min=1370, max=77222, avg=11976.96, stdev=9949.15 00:10:44.591 clat percentiles (usec): 00:10:44.591 | 1.00th=[ 1680], 5.00th=[ 2245], 10.00th=[ 3163], 20.00th=[ 5342], 00:10:44.591 | 30.00th=[ 6587], 40.00th=[ 7046], 50.00th=[ 9110], 60.00th=[11600], 00:10:44.591 | 70.00th=[13698], 80.00th=[17957], 90.00th=[22152], 95.00th=[26084], 00:10:44.591 | 99.00th=[63701], 99.50th=[68682], 99.90th=[72877], 99.95th=[72877], 00:10:44.591 | 99.99th=[77071] 00:10:44.591 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:10:44.591 slat (nsec): min=1583, max=13672k, avg=91739.47, stdev=593437.65 00:10:44.591 clat (usec): min=809, max=77225, avg=13823.39, stdev=12241.55 00:10:44.591 lat (usec): min=811, max=77232, avg=13915.13, stdev=12308.88 00:10:44.591 clat percentiles (usec): 00:10:44.591 | 1.00th=[ 1172], 5.00th=[ 3064], 10.00th=[ 4047], 20.00th=[ 5538], 00:10:44.591 | 30.00th=[ 6456], 40.00th=[ 6915], 50.00th=[ 8225], 60.00th=[10945], 00:10:44.591 | 70.00th=[14091], 80.00th=[22414], 90.00th=[32900], 95.00th=[38536], 00:10:44.591 | 99.00th=[55313], 99.50th=[62129], 99.90th=[63177], 99.95th=[77071], 00:10:44.591 | 99.99th=[77071] 00:10:44.591 bw ( KiB/s): min=12288, max=28664, per=25.40%, avg=20476.00, stdev=11579.58, samples=2 00:10:44.591 iops : min= 3072, max= 7166, avg=5119.00, stdev=2894.90, samples=2 00:10:44.591 lat (usec) : 1000=0.25% 00:10:44.591 lat (msec) : 2=3.11%, 4=7.85%, 10=43.88%, 20=26.64%, 50=16.43% 00:10:44.591 lat (msec) : 100=1.85% 00:10:44.591 cpu : usr=3.88%, sys=4.48%, ctx=420, majf=0, minf=1 00:10:44.591 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:44.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:44.591 issued rwts: total=4735,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.591 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:44.591 job2: (groupid=0, jobs=1): err= 0: pid=726213: Wed Jul 24 22:59:02 2024 00:10:44.591 read: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec) 00:10:44.591 slat (nsec): min=946, max=24350k, avg=196496.60, stdev=1417625.22 00:10:44.591 clat (msec): min=11, max=106, avg=23.30, stdev=16.34 00:10:44.591 lat (msec): min=11, max=106, avg=23.50, stdev=16.48 00:10:44.591 clat percentiles (msec): 00:10:44.591 | 1.00th=[ 12], 5.00th=[ 14], 10.00th=[ 15], 20.00th=[ 16], 00:10:44.591 | 30.00th=[ 16], 40.00th=[ 16], 50.00th=[ 17], 60.00th=[ 18], 00:10:44.591 | 70.00th=[ 22], 80.00th=[ 27], 90.00th=[ 40], 95.00th=[ 73], 00:10:44.591 | 99.00th=[ 88], 99.50th=[ 88], 99.90th=[ 88], 99.95th=[ 99], 00:10:44.591 | 99.99th=[ 107] 00:10:44.591 write: IOPS=2403, BW=9615KiB/s (9846kB/s)(9692KiB/1008msec); 0 zone resets 00:10:44.591 slat (nsec): min=1620, max=22902k, avg=242086.45, stdev=1303698.92 00:10:44.591 clat (usec): min=4486, max=89840, avg=32694.24, stdev=14320.29 00:10:44.591 lat (usec): min=10577, max=89864, avg=32936.32, stdev=14410.34 00:10:44.591 clat percentiles (usec): 00:10:44.591 | 1.00th=[14484], 5.00th=[18220], 10.00th=[19530], 20.00th=[20841], 00:10:44.591 | 30.00th=[23987], 40.00th=[26084], 50.00th=[27657], 60.00th=[32113], 00:10:44.591 | 70.00th=[36439], 80.00th=[42206], 90.00th=[55313], 95.00th=[67634], 00:10:44.591 | 99.00th=[68682], 99.50th=[71828], 99.90th=[72877], 99.95th=[87557], 00:10:44.591 | 99.99th=[89654] 00:10:44.591 bw ( KiB/s): min= 6744, max=11616, per=11.39%, avg=9180.00, stdev=3445.02, samples=2 00:10:44.591 iops : min= 1686, max= 2904, avg=2295.00, stdev=861.26, 
samples=2 00:10:44.591 lat (msec) : 10=0.02%, 20=37.44%, 50=52.45%, 100=10.06%, 250=0.02% 00:10:44.591 cpu : usr=1.39%, sys=2.88%, ctx=269, majf=0, minf=1 00:10:44.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:10:44.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:44.591 issued rwts: total=2048,2423,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.591 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:44.591 job3: (groupid=0, jobs=1): err= 0: pid=726214: Wed Jul 24 22:59:02 2024 00:10:44.591 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:10:44.591 slat (nsec): min=964, max=28186k, avg=160294.07, stdev=1168451.62 00:10:44.591 clat (usec): min=5501, max=49494, avg=18582.72, stdev=7858.59 00:10:44.591 lat (usec): min=5506, max=49498, avg=18743.01, stdev=7960.06 00:10:44.591 clat percentiles (usec): 00:10:44.591 | 1.00th=[ 6980], 5.00th=[11076], 10.00th=[11469], 20.00th=[11863], 00:10:44.591 | 30.00th=[12125], 40.00th=[14877], 50.00th=[18482], 60.00th=[19006], 00:10:44.591 | 70.00th=[21103], 80.00th=[23200], 90.00th=[29492], 95.00th=[32113], 00:10:44.591 | 99.00th=[45876], 99.50th=[47449], 99.90th=[49546], 99.95th=[49546], 00:10:44.591 | 99.99th=[49546] 00:10:44.591 write: IOPS=2629, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1005msec); 0 zone resets 00:10:44.591 slat (nsec): min=1630, max=14169k, avg=218370.26, stdev=978716.85 00:10:44.591 clat (usec): min=1190, max=100137, avg=30217.14, stdev=16854.84 00:10:44.591 lat (usec): min=1201, max=100146, avg=30435.51, stdev=16943.87 00:10:44.591 clat percentiles (msec): 00:10:44.591 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 12], 20.00th=[ 17], 00:10:44.591 | 30.00th=[ 22], 40.00th=[ 27], 50.00th=[ 29], 60.00th=[ 33], 00:10:44.591 | 70.00th=[ 35], 80.00th=[ 39], 90.00th=[ 53], 95.00th=[ 61], 00:10:44.591 | 99.00th=[ 94], 99.50th=[ 100], 99.90th=[ 101], 99.95th=[ 
101], 00:10:44.591 | 99.99th=[ 101] 00:10:44.591 bw ( KiB/s): min= 9616, max=10912, per=12.73%, avg=10264.00, stdev=916.41, samples=2 00:10:44.591 iops : min= 2404, max= 2728, avg=2566.00, stdev=229.10, samples=2 00:10:44.591 lat (msec) : 2=0.35%, 4=0.25%, 10=3.56%, 20=39.94%, 50=50.43% 00:10:44.591 lat (msec) : 100=5.34%, 250=0.13% 00:10:44.591 cpu : usr=2.29%, sys=2.69%, ctx=316, majf=0, minf=1 00:10:44.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:44.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:44.591 issued rwts: total=2560,2643,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.591 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:44.591 00:10:44.591 Run status group 0 (all jobs): 00:10:44.591 READ: bw=73.9MiB/s (77.5MB/s), 8127KiB/s-37.8MiB/s (8322kB/s-39.6MB/s), io=74.5MiB (78.1MB), run=1005-1008msec 00:10:44.591 WRITE: bw=78.7MiB/s (82.6MB/s), 9615KiB/s-39.4MiB/s (9846kB/s-41.3MB/s), io=79.4MiB (83.2MB), run=1005-1008msec 00:10:44.591 00:10:44.591 Disk stats (read/write): 00:10:44.591 nvme0n1: ios=8279/8704, merge=0/0, ticks=25290/24812, in_queue=50102, util=87.98% 00:10:44.591 nvme0n2: ios=3235/3584, merge=0/0, ticks=44405/59600, in_queue=104005, util=98.47% 00:10:44.591 nvme0n3: ios=1656/2048, merge=0/0, ticks=20556/31359, in_queue=51915, util=91.46% 00:10:44.591 nvme0n4: ios=2105/2151, merge=0/0, ticks=39020/63409, in_queue=102429, util=97.76% 00:10:44.591 22:59:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:44.591 [global] 00:10:44.591 thread=1 00:10:44.591 invalidate=1 00:10:44.591 rw=randwrite 00:10:44.591 time_based=1 00:10:44.591 runtime=1 00:10:44.591 ioengine=libaio 00:10:44.591 direct=1 00:10:44.591 bs=4096 00:10:44.591 iodepth=128 
00:10:44.591 norandommap=0 00:10:44.591 numjobs=1 00:10:44.591 00:10:44.591 verify_dump=1 00:10:44.591 verify_backlog=512 00:10:44.591 verify_state_save=0 00:10:44.591 do_verify=1 00:10:44.591 verify=crc32c-intel 00:10:44.591 [job0] 00:10:44.591 filename=/dev/nvme0n1 00:10:44.591 [job1] 00:10:44.591 filename=/dev/nvme0n2 00:10:44.591 [job2] 00:10:44.591 filename=/dev/nvme0n3 00:10:44.591 [job3] 00:10:44.591 filename=/dev/nvme0n4 00:10:44.591 Could not set queue depth (nvme0n1) 00:10:44.591 Could not set queue depth (nvme0n2) 00:10:44.591 Could not set queue depth (nvme0n3) 00:10:44.591 Could not set queue depth (nvme0n4) 00:10:44.852 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:44.852 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:44.852 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:44.852 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:44.852 fio-3.35 00:10:44.852 Starting 4 threads 00:10:46.236 00:10:46.236 job0: (groupid=0, jobs=1): err= 0: pid=726736: Wed Jul 24 22:59:03 2024 00:10:46.236 read: IOPS=4553, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1012msec) 00:10:46.236 slat (nsec): min=879, max=19567k, avg=107139.66, stdev=685964.02 00:10:46.236 clat (usec): min=4665, max=44728, avg=13618.24, stdev=4960.06 00:10:46.236 lat (usec): min=4670, max=46652, avg=13725.38, stdev=4995.07 00:10:46.236 clat percentiles (usec): 00:10:46.236 | 1.00th=[ 6718], 5.00th=[ 8160], 10.00th=[ 8586], 20.00th=[ 9896], 00:10:46.236 | 30.00th=[10552], 40.00th=[11994], 50.00th=[12911], 60.00th=[14091], 00:10:46.236 | 70.00th=[14615], 80.00th=[16188], 90.00th=[20055], 95.00th=[21890], 00:10:46.236 | 99.00th=[34341], 99.50th=[35914], 99.90th=[44827], 99.95th=[44827], 00:10:46.236 | 99.99th=[44827] 00:10:46.236 write: 
IOPS=4946, BW=19.3MiB/s (20.3MB/s)(19.6MiB/1012msec); 0 zone resets
00:10:46.236 slat (nsec): min=1517, max=13626k, avg=96781.07, stdev=557595.20
00:10:46.236 clat (usec): min=1333, max=51396, avg=13096.30, stdev=7840.45
00:10:46.236 lat (usec): min=1344, max=51399, avg=13193.09, stdev=7882.57
00:10:46.236 clat percentiles (usec):
00:10:46.236 | 1.00th=[ 5145], 5.00th=[ 6259], 10.00th=[ 7242], 20.00th=[ 8094],
00:10:46.236 | 30.00th=[ 8717], 40.00th=[ 9241], 50.00th=[10159], 60.00th=[11469],
00:10:46.236 | 70.00th=[14222], 80.00th=[16319], 90.00th=[21365], 95.00th=[34341],
00:10:46.236 | 99.00th=[41157], 99.50th=[45351], 99.90th=[51119], 99.95th=[51643],
00:10:46.236 | 99.99th=[51643]
00:10:46.236 bw ( KiB/s): min=18544, max=20480, per=22.22%, avg=19512.00, stdev=1368.96, samples=2
00:10:46.236 iops : min= 4636, max= 5120, avg=4878.00, stdev=342.24, samples=2
00:10:46.236 lat (msec) : 2=0.18%, 4=0.06%, 10=35.30%, 20=53.79%, 50=10.61%
00:10:46.236 lat (msec) : 100=0.06%
00:10:46.236 cpu : usr=3.46%, sys=4.25%, ctx=534, majf=0, minf=1
00:10:46.236 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3%
00:10:46.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:46.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:10:46.236 issued rwts: total=4608,5006,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:46.236 latency : target=0, window=0, percentile=100.00%, depth=128
00:10:46.236 job1: (groupid=0, jobs=1): err= 0: pid=726737: Wed Jul 24 22:59:03 2024
00:10:46.236 read: IOPS=7179, BW=28.0MiB/s (29.4MB/s)(28.3MiB/1008msec)
00:10:46.236 slat (nsec): min=909, max=8200.2k, avg=67633.62, stdev=414825.30
00:10:46.236 clat (usec): min=2523, max=42036, avg=9243.00, stdev=4600.01
00:10:46.236 lat (usec): min=2532, max=42042, avg=9310.63, stdev=4605.16
00:10:46.236 clat percentiles (usec):
00:10:46.236 | 1.00th=[ 3982], 5.00th=[ 5407], 10.00th=[ 5932], 20.00th=[ 6980],
00:10:46.236 | 30.00th=[ 7439], 40.00th=[ 7832], 50.00th=[ 8225], 60.00th=[ 8717],
00:10:46.236 | 70.00th=[ 9634], 80.00th=[10814], 90.00th=[12256], 95.00th=[14877],
00:10:46.236 | 99.00th=[39060], 99.50th=[39060], 99.90th=[41681], 99.95th=[41681],
00:10:46.236 | 99.99th=[42206]
00:10:46.236 write: IOPS=7619, BW=29.8MiB/s (31.2MB/s)(30.0MiB/1008msec); 0 zone resets
00:10:46.236 slat (nsec): min=1522, max=34211k, avg=61866.57, stdev=516819.71
00:10:46.236 clat (usec): min=1023, max=38351, avg=7934.28, stdev=2580.05
00:10:46.236 lat (usec): min=1035, max=39454, avg=7996.14, stdev=2608.64
00:10:46.236 clat percentiles (usec):
00:10:46.236 | 1.00th=[ 2671], 5.00th=[ 4948], 10.00th=[ 5800], 20.00th=[ 6587],
00:10:46.236 | 30.00th=[ 6849], 40.00th=[ 7111], 50.00th=[ 7504], 60.00th=[ 7963],
00:10:46.236 | 70.00th=[ 8455], 80.00th=[ 9241], 90.00th=[10683], 95.00th=[12125],
00:10:46.236 | 99.00th=[15139], 99.50th=[18220], 99.90th=[38011], 99.95th=[38011],
00:10:46.236 | 99.99th=[38536]
00:10:46.236 bw ( KiB/s): min=30000, max=30968, per=34.72%, avg=30484.00, stdev=684.48, samples=2
00:10:46.236 iops : min= 7500, max= 7742, avg=7621.00, stdev=171.12, samples=2
00:10:46.236 lat (msec) : 2=0.16%, 4=2.08%, 10=78.48%, 20=18.28%, 50=0.99%
00:10:46.236 cpu : usr=3.97%, sys=7.25%, ctx=684, majf=0, minf=1
00:10:46.236 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:10:46.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:46.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:10:46.236 issued rwts: total=7237,7680,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:46.236 latency : target=0, window=0, percentile=100.00%, depth=128
00:10:46.236 job2: (groupid=0, jobs=1): err= 0: pid=726738: Wed Jul 24 22:59:03 2024
00:10:46.236 read: IOPS=4261, BW=16.6MiB/s (17.5MB/s)(16.7MiB/1003msec)
00:10:46.236 slat (nsec): min=928, max=11715k, avg=122469.48, stdev=725746.88
00:10:46.236 clat (usec): min=2457, max=40169, avg=15630.78, stdev=5971.84
00:10:46.236 lat (usec): min=2463, max=40177, avg=15753.24, stdev=5992.26
00:10:46.236 clat percentiles (usec):
00:10:46.236 | 1.00th=[ 6390], 5.00th=[ 8291], 10.00th=[ 9241], 20.00th=[10683],
00:10:46.236 | 30.00th=[11863], 40.00th=[12518], 50.00th=[13829], 60.00th=[15664],
00:10:46.236 | 70.00th=[19268], 80.00th=[21627], 90.00th=[22676], 95.00th=[26608],
00:10:46.236 | 99.00th=[32637], 99.50th=[34341], 99.90th=[40109], 99.95th=[40109],
00:10:46.236 | 99.99th=[40109]
00:10:46.236 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets
00:10:46.236 slat (nsec): min=1558, max=8763.5k, avg=98558.07, stdev=535152.01
00:10:46.236 clat (usec): min=5074, max=39510, avg=12889.03, stdev=5377.63
00:10:46.236 lat (usec): min=5095, max=39520, avg=12987.58, stdev=5390.60
00:10:46.236 clat percentiles (usec):
00:10:46.236 | 1.00th=[ 6980], 5.00th=[ 7767], 10.00th=[ 8455], 20.00th=[ 9110],
00:10:46.236 | 30.00th=[ 9503], 40.00th=[10159], 50.00th=[11207], 60.00th=[12780],
00:10:46.236 | 70.00th=[14353], 80.00th=[16057], 90.00th=[17957], 95.00th=[22676],
00:10:46.236 | 99.00th=[36439], 99.50th=[38536], 99.90th=[39584], 99.95th=[39584],
00:10:46.236 | 99.99th=[39584]
00:10:46.236 bw ( KiB/s): min=16384, max=20480, per=20.99%, avg=18432.00, stdev=2896.31, samples=2
00:10:46.236 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2
00:10:46.236 lat (msec) : 4=0.26%, 10=26.19%, 20=57.00%, 50=16.55%
00:10:46.236 cpu : usr=2.59%, sys=4.69%, ctx=448, majf=0, minf=1
00:10:46.236 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:10:46.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:46.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:10:46.236 issued rwts: total=4274,4608,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:46.236 latency : target=0, window=0, percentile=100.00%, depth=128
00:10:46.236 job3: (groupid=0, jobs=1): err= 0: pid=726739: Wed Jul 24 22:59:03 2024
00:10:46.236 read: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec)
00:10:46.236 slat (nsec): min=934, max=15732k, avg=103901.36, stdev=764614.23
00:10:46.236 clat (usec): min=2816, max=58707, avg=14570.71, stdev=9434.13
00:10:46.236 lat (usec): min=3573, max=58734, avg=14674.61, stdev=9503.14
00:10:46.236 clat percentiles (usec):
00:10:46.236 | 1.00th=[ 4228], 5.00th=[ 6259], 10.00th=[ 6980], 20.00th=[ 8029],
00:10:46.236 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9634], 60.00th=[11863],
00:10:46.236 | 70.00th=[15664], 80.00th=[23725], 90.00th=[27395], 95.00th=[31589],
00:10:46.236 | 99.00th=[45351], 99.50th=[45351], 99.90th=[50070], 99.95th=[50594],
00:10:46.236 | 99.99th=[58459]
00:10:46.236 write: IOPS=4875, BW=19.0MiB/s (20.0MB/s)(19.2MiB/1009msec); 0 zone resets
00:10:46.236 slat (nsec): min=1568, max=12456k, avg=89558.63, stdev=570194.55
00:10:46.236 clat (usec): min=1000, max=48948, avg=12382.27, stdev=7692.26
00:10:46.236 lat (usec): min=1009, max=48951, avg=12471.83, stdev=7751.94
00:10:46.236 clat percentiles (usec):
00:10:46.236 | 1.00th=[ 2999], 5.00th=[ 4293], 10.00th=[ 5800], 20.00th=[ 7177],
00:10:46.236 | 30.00th=[ 7767], 40.00th=[ 8717], 50.00th=[ 9634], 60.00th=[11600],
00:10:46.236 | 70.00th=[14222], 80.00th=[16909], 90.00th=[21365], 95.00th=[29492],
00:10:46.236 | 99.00th=[40109], 99.50th=[43254], 99.90th=[49021], 99.95th=[49021],
00:10:46.236 | 99.99th=[49021]
00:10:46.236 bw ( KiB/s): min=10256, max=28072, per=21.83%, avg=19164.00, stdev=12597.81, samples=2
00:10:46.236 iops : min= 2564, max= 7018, avg=4791.00, stdev=3149.45, samples=2
00:10:46.236 lat (msec) : 2=0.03%, 4=1.92%, 10=51.08%, 20=27.83%, 50=18.95%
00:10:46.236 lat (msec) : 100=0.20%
00:10:46.236 cpu : usr=3.27%, sys=5.75%, ctx=395, majf=0, minf=1
00:10:46.236 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3%
00:10:46.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:46.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:10:46.236 issued rwts: total=4608,4919,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:46.236 latency : target=0, window=0, percentile=100.00%, depth=128
00:10:46.236
00:10:46.236 Run status group 0 (all jobs):
00:10:46.236 READ: bw=80.0MiB/s (83.9MB/s), 16.6MiB/s-28.0MiB/s (17.5MB/s-29.4MB/s), io=81.0MiB (84.9MB), run=1003-1012msec
00:10:46.236 WRITE: bw=85.7MiB/s (89.9MB/s), 17.9MiB/s-29.8MiB/s (18.8MB/s-31.2MB/s), io=86.8MiB (91.0MB), run=1003-1012msec
00:10:46.236
00:10:46.236 Disk stats (read/write):
00:10:46.236 nvme0n1: ios=4115/4298, merge=0/0, ticks=20976/17943, in_queue=38919, util=82.57%
00:10:46.236 nvme0n2: ios=6197/6278, merge=0/0, ticks=25690/23228, in_queue=48918, util=88.69%
00:10:46.236 nvme0n3: ios=3259/3584, merge=0/0, ticks=16293/14206, in_queue=30499, util=93.36%
00:10:46.236 nvme0n4: ios=4346/4608, merge=0/0, ticks=28437/26322, in_queue=54759, util=93.92%
00:10:46.236 22:59:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync
00:10:46.236 22:59:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=727068
00:10:46.236 22:59:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3
00:10:46.236 22:59:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10
00:10:46.236 [global]
00:10:46.236 thread=1
00:10:46.236 invalidate=1
00:10:46.236 rw=read
00:10:46.236 time_based=1
00:10:46.236 runtime=10
00:10:46.236 ioengine=libaio
00:10:46.237 direct=1
00:10:46.237 bs=4096
00:10:46.237 iodepth=1
00:10:46.237 norandommap=1
00:10:46.237 numjobs=1
00:10:46.237
00:10:46.237 [job0]
00:10:46.237 filename=/dev/nvme0n1
00:10:46.237 [job1]
00:10:46.237 filename=/dev/nvme0n2
00:10:46.237 [job2]
00:10:46.237 filename=/dev/nvme0n3
00:10:46.237 [job3]
00:10:46.237 filename=/dev/nvme0n4
00:10:46.237 Could not set queue depth (nvme0n1)
00:10:46.237 Could not set queue depth (nvme0n2)
00:10:46.237 Could not set queue depth (nvme0n3)
00:10:46.237 Could not set queue depth (nvme0n4)
00:10:46.496 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:10:46.496 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:10:46.496 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:10:46.496 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:10:46.496 fio-3.35
00:10:46.496 Starting 4 threads
00:10:49.796 22:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0
00:10:49.796 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=8892416, buflen=4096
00:10:49.796 fio: pid=727263, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error
00:10:49.796 22:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0
00:10:49.796 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=10604544, buflen=4096
00:10:49.796 fio: pid=727262, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error
00:10:49.796 22:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:10:49.796 22:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:10:49.796 fio: io_u error on file /dev/nvme0n1: Input/output error: read offset=13418496, buflen=4096
00:10:49.796 fio: pid=727257, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error
00:10:49.796 22:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:10:49.796 22:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:10:49.796 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=6107136, buflen=4096
00:10:49.796 fio: pid=727261, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error
00:10:49.796 22:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:10:49.796 22:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2
00:10:49.796
00:10:49.796 job0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=727257: Wed Jul 24 22:59:07 2024
00:10:49.796 read: IOPS=1128, BW=4511KiB/s (4619kB/s)(12.8MiB/2905msec)
00:10:49.796 slat (usec): min=6, max=19747, avg=34.24, stdev=433.59
00:10:49.796 clat (usec): min=196, max=43004, avg=846.31, stdev=1042.77
00:10:49.796 lat (usec): min=203, max=43028, avg=880.55, stdev=1129.79
00:10:49.796 clat percentiles (usec):
00:10:49.796 | 1.00th=[ 474], 5.00th=[ 578], 10.00th=[ 627], 20.00th=[ 701],
00:10:49.796 | 30.00th=[ 742], 40.00th=[ 791], 50.00th=[ 832], 60.00th=[ 873],
00:10:49.796 | 70.00th=[ 906], 80.00th=[ 947], 90.00th=[ 988], 95.00th=[ 1020],
00:10:49.796 | 99.00th=[ 1205], 99.50th=[ 1237], 99.90th=[ 1401], 99.95th=[42206],
00:10:49.796 | 99.99th=[43254]
00:10:49.796 bw ( KiB/s): min= 4392, max= 4944, per=37.71%, avg=4699.20, stdev=262.09, samples=5
00:10:49.796 iops : min= 1098, max= 1236, avg=1174.80, stdev=65.52, samples=5
00:10:49.796 lat (usec) : 250=0.15%, 500=1.34%, 750=29.42%, 1000=61.64%
00:10:49.796 lat (msec) : 2=7.32%, 4=0.03%, 50=0.06%
00:10:49.796 cpu : usr=1.41%, sys=2.93%, ctx=3279, majf=0, minf=1
00:10:49.796 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:10:49.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:49.796 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:49.796 issued rwts: total=3277,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:49.796 latency : target=0, window=0, percentile=100.00%, depth=1
00:10:49.796 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=727261: Wed Jul 24 22:59:07 2024
00:10:49.796 read: IOPS=487, BW=1950KiB/s (1997kB/s)(5964KiB/3058msec)
00:10:49.796 slat (usec): min=6, max=7589, avg=28.43, stdev=196.07
00:10:49.796 clat (usec): min=439, max=43096, avg=2016.03, stdev=6744.53
00:10:49.796 lat (usec): min=450, max=43121, avg=2039.39, stdev=6745.89
00:10:49.796 clat percentiles (usec):
00:10:49.796 | 1.00th=[ 553], 5.00th=[ 660], 10.00th=[ 717], 20.00th=[ 750],
00:10:49.796 | 30.00th=[ 775], 40.00th=[ 807], 50.00th=[ 840], 60.00th=[ 881],
00:10:49.796 | 70.00th=[ 914], 80.00th=[ 955], 90.00th=[ 1237], 95.00th=[ 1516],
00:10:49.796 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254],
00:10:49.796 | 99.99th=[43254]
00:10:49.796 bw ( KiB/s): min= 104, max= 4728, per=18.97%, avg=2364.80, stdev=2255.71, samples=5
00:10:49.796 iops : min= 26, max= 1182, avg=591.20, stdev=563.93, samples=5
00:10:49.796 lat (usec) : 500=0.54%, 750=19.71%, 1000=66.02%
00:10:49.796 lat (msec) : 2=10.86%, 50=2.82%
00:10:49.796 cpu : usr=0.39%, sys=1.44%, ctx=1496, majf=0, minf=1
00:10:49.796 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:10:49.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:49.796 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:49.796 issued rwts: total=1492,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:49.796 latency : target=0, window=0, percentile=100.00%, depth=1
00:10:49.796 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=727262: Wed Jul 24 22:59:07 2024
00:10:49.796 read: IOPS=951, BW=3803KiB/s (3894kB/s)(10.1MiB/2723msec)
00:10:49.796 slat (usec): min=6, max=15815, avg=36.43, stdev=399.46
00:10:49.796 clat (usec): min=502, max=1435, avg=1008.43, stdev=112.56
00:10:49.796 lat (usec): min=527, max=16818, avg=1044.86, stdev=415.28
00:10:49.796 clat percentiles (usec):
00:10:49.796 | 1.00th=[ 693], 5.00th=[ 840], 10.00th=[ 889], 20.00th=[ 947],
00:10:49.796 | 30.00th=[ 971], 40.00th=[ 988], 50.00th=[ 1004], 60.00th=[ 1020],
00:10:49.796 | 70.00th=[ 1037], 80.00th=[ 1057], 90.00th=[ 1123], 95.00th=[ 1254],
00:10:49.796 | 99.00th=[ 1352], 99.50th=[ 1385], 99.90th=[ 1418], 99.95th=[ 1418],
00:10:49.796 | 99.99th=[ 1434]
00:10:49.796 bw ( KiB/s): min= 3768, max= 3968, per=30.81%, avg=3840.00, stdev=85.98, samples=5
00:10:49.796 iops : min= 942, max= 992, avg=960.00, stdev=21.49, samples=5
00:10:49.796 lat (usec) : 750=1.54%, 1000=46.02%
00:10:49.796 lat (msec) : 2=52.39%
00:10:49.796 cpu : usr=0.96%, sys=2.94%, ctx=2592, majf=0, minf=1
00:10:49.796 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:10:49.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:49.796 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:49.796 issued rwts: total=2590,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:49.796 latency : target=0, window=0, percentile=100.00%, depth=1
00:10:49.796 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=727263: Wed Jul 24 22:59:07 2024
00:10:49.796 read: IOPS=850, BW=3400KiB/s (3482kB/s)(8684KiB/2554msec)
00:10:49.796 slat (nsec): min=7115, max=59845, avg=24846.69, stdev=2887.92
00:10:49.796 clat (usec): min=735, max=1640, avg=1144.72, stdev=145.05
00:10:49.796 lat (usec): min=760, max=1665, avg=1169.57, stdev=144.93
00:10:49.796 clat percentiles (usec):
00:10:49.796 | 1.00th=[ 832], 5.00th=[ 922], 10.00th=[ 971], 20.00th=[ 1020],
00:10:49.796 | 30.00th=[ 1057], 40.00th=[ 1090], 50.00th=[ 1139], 60.00th=[ 1172],
00:10:49.796 | 70.00th=[ 1221], 80.00th=[ 1254], 90.00th=[ 1336], 95.00th=[ 1401],
00:10:49.796 | 99.00th=[ 1532], 99.50th=[ 1565], 99.90th=[ 1614], 99.95th=[ 1631],
00:10:49.796 | 99.99th=[ 1647]
00:10:49.796 bw ( KiB/s): min= 3216, max= 3512, per=27.29%, avg=3401.60, stdev=111.48, samples=5
00:10:49.796 iops : min= 804, max= 878, avg=850.40, stdev=27.87, samples=5
00:10:49.796 lat (usec) : 750=0.05%, 1000=14.64%
00:10:49.796 lat (msec) : 2=85.27%
00:10:49.796 cpu : usr=0.94%, sys=2.47%, ctx=2173, majf=0, minf=2
00:10:49.796 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:10:49.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:49.796 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:49.796 issued rwts: total=2172,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:49.796 latency : target=0, window=0, percentile=100.00%, depth=1
00:10:49.796
00:10:49.796 Run status group 0 (all jobs):
00:10:49.796 READ: bw=12.2MiB/s (12.8MB/s), 1950KiB/s-4511KiB/s (1997kB/s-4619kB/s), io=37.2MiB (39.0MB), run=2554-3058msec
00:10:49.796
00:10:49.796 Disk stats (read/write):
00:10:49.796 nvme0n1: ios=3255/0, merge=0/0, ticks=2630/0, in_queue=2630, util=93.69%
00:10:49.796 nvme0n2: ios=1486/0, merge=0/0, ticks=2777/0, in_queue=2777, util=95.33%
00:10:49.796 nvme0n3: ios=2483/0, merge=0/0, ticks=2387/0, in_queue=2387, util=96.03%
00:10:49.796 nvme0n4: ios=1985/0, merge=0/0, ticks=2213/0, in_queue=2213, util=96.02%
00:10:50.057 22:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:10:50.057 22:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:10:50.317 22:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:10:50.318 22:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:10:50.318 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:10:50.318 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:10:50.579 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:10:50.579 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:10:50.840 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0
00:10:50.840 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 727068
00:10:50.840 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4
00:10:50.840 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:10:50.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:50.840 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:10:50.840 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0
00:10:50.840 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:10:50.840 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:50.840 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:10:50.840 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:50.840 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0
00:10:50.840 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:10:50.840 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:10:50.840 nvmf hotplug test: fio failed as expected
00:10:50.840 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:51.101 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:10:51.101 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:10:51.101 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:10:51.101 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:10:51.101 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini
00:10:51.101 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup
00:10:51.101 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync
00:10:51.101 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:51.101 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e
00:10:51.101 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:51.101 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:51.101 rmmod nvme_tcp
00:10:51.101 rmmod nvme_fabrics
00:10:51.101 rmmod nvme_keyring
00:10:51.101 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:51.101 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e
00:10:51.101 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0
00:10:51.101 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 723444 ']'
00:10:51.101 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 723444
00:10:51.101 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 723444 ']'
00:10:51.101 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 723444
00:10:51.101 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname
00:10:51.101 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:51.101 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 723444
00:10:51.101 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:51.101 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:10:51.101 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 723444'
00:10:51.101 killing process with pid 723444
00:10:51.101 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 723444
00:10:51.101 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 723444
00:10:51.362 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:10:51.362 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:10:51.362 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:10:51.362 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:10:51.362 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns
00:10:51.362 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:51.362 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:51.362 22:59:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:53.276 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:53.276
00:10:53.276 real 0m29.508s
00:10:53.276 user 2m35.579s
00:10:53.276 sys 0m10.127s
00:10:53.276 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:53.276 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:10:53.276 ************************************
00:10:53.276 END TEST nvmf_fio_target
00:10:53.276 ************************************
00:10:53.276 22:59:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:10:53.276 22:59:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:10:53.276 22:59:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:53.276 22:59:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:53.537 ************************************
00:10:53.537 START TEST nvmf_bdevio
00:10:53.537 ************************************
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:10:53.537 * Looking for test storage...
00:10:53.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no
00:10:53.537 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns
00:10:53.538 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:53.538 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:53.538 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:53.538 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:10:53.538 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:10:53.538 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable
00:10:53.538 22:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=()
00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs
00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=()
00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=()
00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers
00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=()
00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs
00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=()
00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810
00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=()
00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722
00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=()
00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx
00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:11:01.681 22:59:18
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:01.681 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:01.681 Found 
0000:31:00.1 (0x8086 - 0x159b) 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:01.681 Found net devices under 0000:31:00.0: cvl_0_0 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:01.681 Found net devices under 0000:31:00.1: cvl_0_1 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:01.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:01.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:11:01.681 00:11:01.681 --- 10.0.0.2 ping statistics --- 00:11:01.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.681 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:11:01.681 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:01.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:01.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.351 ms 00:11:01.681 00:11:01.681 --- 10.0.0.1 ping statistics --- 00:11:01.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.682 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:11:01.682 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:01.682 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:11:01.682 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:01.682 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:01.682 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:01.682 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:01.682 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:01.682 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:01.682 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:01.682 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:01.682 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:01.682 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:01.682 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.682 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=732655 00:11:01.682 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 732655 00:11:01.682 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:01.682 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 732655 ']' 00:11:01.682 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.682 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:01.682 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.682 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:01.682 22:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.682 [2024-07-24 22:59:18.867182] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:11:01.682 [2024-07-24 22:59:18.867283] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.682 EAL: No free 2048 kB hugepages reported on node 1 00:11:01.682 [2024-07-24 22:59:18.968598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:01.682 [2024-07-24 22:59:19.060713] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:01.682 [2024-07-24 22:59:19.060777] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:01.682 [2024-07-24 22:59:19.060786] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.682 [2024-07-24 22:59:19.060793] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.682 [2024-07-24 22:59:19.060799] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:01.682 [2024-07-24 22:59:19.060977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:01.682 [2024-07-24 22:59:19.061138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:01.682 [2024-07-24 22:59:19.061300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:01.682 [2024-07-24 22:59:19.061300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:01.943 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:01.943 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:01.943 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:01.943 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:01.943 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.943 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.943 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:01.943 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.943 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.943 [2024-07-24 22:59:19.708464] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:01.943 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.943 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:01.943 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.943 22:59:19 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.204 Malloc0 00:11:02.204 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.204 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:02.204 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.204 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.204 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.204 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:02.204 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.204 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.204 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.204 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:02.204 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.204 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.204 [2024-07-24 22:59:19.765448] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:02.204 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.204 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:11:02.204 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:02.204 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:11:02.204 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:11:02.204 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:02.204 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:02.204 { 00:11:02.204 "params": { 00:11:02.204 "name": "Nvme$subsystem", 00:11:02.204 "trtype": "$TEST_TRANSPORT", 00:11:02.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:02.204 "adrfam": "ipv4", 00:11:02.204 "trsvcid": "$NVMF_PORT", 00:11:02.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:02.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:02.204 "hdgst": ${hdgst:-false}, 00:11:02.204 "ddgst": ${ddgst:-false} 00:11:02.204 }, 00:11:02.204 "method": "bdev_nvme_attach_controller" 00:11:02.204 } 00:11:02.204 EOF 00:11:02.204 )") 00:11:02.204 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:11:02.204 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:11:02.204 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:02.204 22:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:02.204 "params": { 00:11:02.204 "name": "Nvme1", 00:11:02.204 "trtype": "tcp", 00:11:02.204 "traddr": "10.0.0.2", 00:11:02.204 "adrfam": "ipv4", 00:11:02.204 "trsvcid": "4420", 00:11:02.204 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:02.204 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:02.204 "hdgst": false, 00:11:02.204 "ddgst": false 00:11:02.204 }, 00:11:02.204 "method": "bdev_nvme_attach_controller" 00:11:02.204 }' 00:11:02.204 [2024-07-24 22:59:19.818967] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:11:02.204 [2024-07-24 22:59:19.819034] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid732989 ] 00:11:02.204 EAL: No free 2048 kB hugepages reported on node 1 00:11:02.204 [2024-07-24 22:59:19.892505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:02.204 [2024-07-24 22:59:19.968204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:02.204 [2024-07-24 22:59:19.968327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:02.204 [2024-07-24 22:59:19.968330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.465 I/O targets: 00:11:02.465 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:02.465 00:11:02.465 00:11:02.465 CUnit - A unit testing framework for C - Version 2.1-3 00:11:02.465 http://cunit.sourceforge.net/ 00:11:02.465 00:11:02.465 00:11:02.465 Suite: bdevio tests on: Nvme1n1 00:11:02.465 Test: blockdev write read block ...passed 00:11:02.465 Test: blockdev write zeroes read block ...passed 00:11:02.465 Test: blockdev write zeroes read no split 
...passed 00:11:02.465 Test: blockdev write zeroes read split ...passed 00:11:02.465 Test: blockdev write zeroes read split partial ...passed 00:11:02.465 Test: blockdev reset ...[2024-07-24 22:59:20.249062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:02.465 [2024-07-24 22:59:20.249128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b79370 (9): Bad file descriptor 00:11:02.726 [2024-07-24 22:59:20.263494] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:02.726 passed 00:11:02.726 Test: blockdev write read 8 blocks ...passed 00:11:02.726 Test: blockdev write read size > 128k ...passed 00:11:02.726 Test: blockdev write read invalid size ...passed 00:11:02.726 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:02.726 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:02.726 Test: blockdev write read max offset ...passed 00:11:02.726 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:02.726 Test: blockdev writev readv 8 blocks ...passed 00:11:02.726 Test: blockdev writev readv 30 x 1block ...passed 00:11:02.726 Test: blockdev writev readv block ...passed 00:11:02.726 Test: blockdev writev readv size > 128k ...passed 00:11:02.726 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:02.726 Test: blockdev comparev and writev ...[2024-07-24 22:59:20.491038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:02.726 [2024-07-24 22:59:20.491063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:02.726 [2024-07-24 22:59:20.491075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:11:02.726 [2024-07-24 22:59:20.491081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:02.726 [2024-07-24 22:59:20.491631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:02.726 [2024-07-24 22:59:20.491639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:02.726 [2024-07-24 22:59:20.491649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:02.726 [2024-07-24 22:59:20.491654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:02.726 [2024-07-24 22:59:20.492227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:02.726 [2024-07-24 22:59:20.492235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:02.726 [2024-07-24 22:59:20.492244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:02.726 [2024-07-24 22:59:20.492249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:02.726 [2024-07-24 22:59:20.492785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:02.726 [2024-07-24 22:59:20.492792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:02.726 [2024-07-24 22:59:20.492801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:02.726 [2024-07-24 22:59:20.492807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:02.987 passed 00:11:02.987 Test: blockdev nvme passthru rw ...passed 00:11:02.987 Test: blockdev nvme passthru vendor specific ...[2024-07-24 22:59:20.577776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:02.987 [2024-07-24 22:59:20.577787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:02.987 [2024-07-24 22:59:20.578272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:02.987 [2024-07-24 22:59:20.578279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:02.987 [2024-07-24 22:59:20.578683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:02.987 [2024-07-24 22:59:20.578694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:02.987 [2024-07-24 22:59:20.579100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:02.987 [2024-07-24 22:59:20.579108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:02.987 passed 00:11:02.987 Test: blockdev nvme admin passthru ...passed 00:11:02.987 Test: blockdev copy ...passed 00:11:02.987 00:11:02.987 Run Summary: Type Total Ran Passed Failed Inactive 00:11:02.987 suites 1 1 n/a 0 0 00:11:02.987 tests 23 23 23 0 0 00:11:02.987 asserts 152 152 152 0 n/a 00:11:02.987 00:11:02.987 Elapsed time = 
1.040 seconds 00:11:02.987 22:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:02.987 22:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.987 22:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.987 22:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.987 22:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:02.987 22:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:02.987 22:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:02.987 22:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:11:02.987 22:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:02.987 22:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:11:02.987 22:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:02.987 22:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:02.987 rmmod nvme_tcp 00:11:03.248 rmmod nvme_fabrics 00:11:03.248 rmmod nvme_keyring 00:11:03.249 22:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:03.249 22:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:11:03.249 22:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:11:03.249 22:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 732655 ']' 00:11:03.249 22:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 732655 00:11:03.249 22:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@950 -- # '[' -z 732655 ']' 00:11:03.249 22:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 732655 00:11:03.249 22:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:03.249 22:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:03.249 22:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 732655 00:11:03.249 22:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:03.249 22:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:03.249 22:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 732655' 00:11:03.249 killing process with pid 732655 00:11:03.249 22:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 732655 00:11:03.249 22:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 732655 00:11:03.509 22:59:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:03.509 22:59:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:03.509 22:59:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:03.509 22:59:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:03.509 22:59:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:03.509 22:59:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.509 22:59:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.509 22:59:21 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.421 22:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:05.421 00:11:05.421 real 0m12.051s 00:11:05.421 user 0m11.786s 00:11:05.421 sys 0m6.262s 00:11:05.421 22:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:05.421 22:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:05.421 ************************************ 00:11:05.421 END TEST nvmf_bdevio 00:11:05.421 ************************************ 00:11:05.421 22:59:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:05.421 00:11:05.421 real 5m7.617s 00:11:05.421 user 11m37.547s 00:11:05.421 sys 1m51.384s 00:11:05.421 22:59:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:05.421 22:59:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:05.421 ************************************ 00:11:05.421 END TEST nvmf_target_core 00:11:05.421 ************************************ 00:11:05.683 22:59:23 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:05.683 22:59:23 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:05.683 22:59:23 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:05.683 22:59:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:05.683 ************************************ 00:11:05.683 START TEST nvmf_target_extra 00:11:05.683 ************************************ 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:05.683 * Looking for test storage... 
00:11:05.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:05.683 22:59:23 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:05.683 ************************************ 00:11:05.683 START TEST nvmf_example 00:11:05.683 ************************************ 00:11:05.683 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:05.944 * Looking for test storage... 
00:11:05.944 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:05.944 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:05.944 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:05.944 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:05.944 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:05.944 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:05.944 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:05.944 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:05.944 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:05.944 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:05.944 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:05.944 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:05.944 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:05.945 22:59:23 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:05.945 22:59:23 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:11:05.945 22:59:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:14.156 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:14.156 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:14.156 22:59:31 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:14.156 Found net devices under 0000:31:00.0: cvl_0_0 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:14.156 Found net devices under 0000:31:00.1: cvl_0_1 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:14.156 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:14.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:14.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:11:14.157 00:11:14.157 --- 10.0.0.2 ping statistics --- 00:11:14.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.157 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:14.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:14.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.375 ms 00:11:14.157 00:11:14.157 --- 10.0.0.1 ping statistics --- 00:11:14.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.157 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # 
NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=738055 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 738055 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 738055 ']' 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:14.157 22:59:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:14.157 EAL: No free 2048 kB hugepages reported on node 1 00:11:15.098 22:59:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:15.098 22:59:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:11:15.098 22:59:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:15.098 22:59:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:15.098 22:59:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:15.098 22:59:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:15.098 22:59:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.098 22:59:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:15.098 22:59:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.098 22:59:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:15.098 22:59:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.098 22:59:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:15.098 22:59:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.098 22:59:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:15.098 22:59:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:15.098 22:59:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.099 22:59:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:15.099 22:59:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.099 22:59:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:15.099 22:59:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:15.099 22:59:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.099 22:59:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:15.099 22:59:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.099 22:59:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.099 22:59:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.099 22:59:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:15.099 22:59:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.099 22:59:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:15.099 22:59:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:11:15.099 EAL: No free 2048 kB hugepages reported on node 1
00:11:27.333 Initializing NVMe Controllers
00:11:27.333 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:27.333 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:27.333 Initialization complete. Launching workers.
00:11:27.333 ========================================================
00:11:27.333 Latency(us)
00:11:27.333 Device Information : IOPS MiB/s Average min max
00:11:27.333 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17972.69 70.21 3561.10 839.39 16312.36
00:11:27.333 ========================================================
00:11:27.333 Total : 17972.69 70.21 3561.10 839.39 16312.36
00:11:27.333
00:11:27.333 22:59:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:11:27.333 22:59:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:11:27.333 22:59:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:27.333 22:59:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync
00:11:27.333 22:59:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:11:27.333 22:59:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e
00:11:27.333 22:59:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:27.333 22:59:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:11:27.333 rmmod nvme_tcp
00:11:27.333 rmmod nvme_fabrics
00:11:27.333 rmmod nvme_keyring
00:11:27.333 22:59:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:27.333 22:59:43 nvmf_tcp.nvmf_target_extra.nvmf_example --
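Stripped of the xtrace noise, the target provisioning that precedes the perf run boils down to five RPCs. The sketch below prints them rather than executing them, since they need a live nvmf target; the `scripts/rpc.py` path and the default `/var/tmp/spdk.sock` socket are assumptions about the deployment, while the method names and arguments are taken from the log.

```shell
#!/usr/bin/env bash
# Prints the provisioning RPC sequence shown in the log; executing it
# requires a running SPDK nvmf target listening on the RPC socket.
gen_rpcs() {
    local rpc="scripts/rpc.py -s /var/tmp/spdk.sock"   # assumed rpc.py location
    local nqn="nqn.2016-06.io.spdk:cnode1"
    printf '%s %s\n' \
        "$rpc" "nvmf_create_transport -t tcp -o -u 8192" \
        "$rpc" "bdev_malloc_create 64 512" \
        "$rpc" "nvmf_create_subsystem $nqn -a -s SPDK00000000000001" \
        "$rpc" "nvmf_subsystem_add_ns $nqn Malloc0" \
        "$rpc" "nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420"
}
gen_rpcs
```

That is: create the TCP transport, back the subsystem with a 64 MiB malloc bdev (512-byte blocks), expose it as `cnode1`, and listen on the namespaced address 10.0.0.2:4420 that `spdk_nvme_perf` then connects to.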
nvmf/common.sh@124 -- # set -e 00:11:27.333 22:59:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:11:27.333 22:59:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 738055 ']' 00:11:27.333 22:59:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 738055 00:11:27.333 22:59:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 738055 ']' 00:11:27.333 22:59:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 738055 00:11:27.333 22:59:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:11:27.333 22:59:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:27.333 22:59:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 738055 00:11:27.333 22:59:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:11:27.333 22:59:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:27.333 22:59:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 738055' 00:11:27.333 killing process with pid 738055 00:11:27.333 22:59:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 738055 00:11:27.333 22:59:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 738055 00:11:27.333 nvmf threads initialize successfully 00:11:27.333 bdev subsystem init successfully 00:11:27.333 created a nvmf target service 00:11:27.333 create targets's poll groups done 00:11:27.333 all subsystems of target started 00:11:27.333 nvmf target is running 00:11:27.333 all subsystems of target stopped 00:11:27.333 destroy targets's poll groups done 00:11:27.333 destroyed the nvmf target service 
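Teardown mirrors the setup: unload the initiator-side kernel modules, remove the namespace, and flush the leftover address. Another dry-run sketch; the log's harness retries `modprobe -r` in a loop (the modules can still be busy while connections drain), and the assumption here is that its `_remove_spdk_ns` helper amounts to an `ip netns delete`, which also returns the enslaved interface to the root namespace.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the cleanup path; set RUN= to execute (needs root).
RUN=echo
teardown_target_ns() {
    local ns=$1 host_if=$2
    for mod in nvme-tcp nvme-fabrics; do
        $RUN modprobe -v -r "$mod"   # may need retries while connections drain
    done
    $RUN ip netns delete "$ns"       # assumed equivalent of the _remove_spdk_ns helper
    $RUN ip -4 addr flush "$host_if"
}
teardown_target_ns cvl_0_0_ns_spdk cvl_0_1
```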
00:11:27.333 bdev subsystem finish successfully 00:11:27.333 nvmf threads destroy successfully 00:11:27.333 22:59:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:27.333 22:59:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:27.333 22:59:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:27.333 22:59:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:27.333 22:59:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:27.333 22:59:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.333 22:59:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.333 22:59:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.907 00:11:27.907 real 0m22.000s 00:11:27.907 user 0m47.089s 00:11:27.907 sys 0m7.149s 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.907 ************************************ 00:11:27.907 END TEST nvmf_example 00:11:27.907 ************************************ 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:27.907 ************************************ 00:11:27.907 START TEST nvmf_filesystem 00:11:27.907 ************************************ 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:27.907 * Looking for test storage... 00:11:27.907 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # 
CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # 
CONFIG_OCF=n 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:11:27.907 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:11:27.908 22:59:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 
00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 
00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:27.908 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:27.908 #define SPDK_CONFIG_H 00:11:27.908 #define SPDK_CONFIG_APPS 1 00:11:27.908 #define SPDK_CONFIG_ARCH native 00:11:27.908 #undef SPDK_CONFIG_ASAN 00:11:27.908 #undef SPDK_CONFIG_AVAHI 00:11:27.908 #undef SPDK_CONFIG_CET 00:11:27.908 #define SPDK_CONFIG_COVERAGE 1 00:11:27.908 #define SPDK_CONFIG_CROSS_PREFIX 00:11:27.908 #undef SPDK_CONFIG_CRYPTO 00:11:27.908 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:27.908 #undef SPDK_CONFIG_CUSTOMOCF 00:11:27.908 #undef SPDK_CONFIG_DAOS 00:11:27.908 #define SPDK_CONFIG_DAOS_DIR 00:11:27.908 #define SPDK_CONFIG_DEBUG 1 00:11:27.908 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:27.908 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:27.908 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:27.908 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:27.908 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:27.908 #undef SPDK_CONFIG_DPDK_UADK 00:11:27.908 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:27.908 #define SPDK_CONFIG_EXAMPLES 1 00:11:27.908 #undef SPDK_CONFIG_FC 00:11:27.908 #define SPDK_CONFIG_FC_PATH 00:11:27.908 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:27.908 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:27.908 
#undef SPDK_CONFIG_FUSE 00:11:27.908 #undef SPDK_CONFIG_FUZZER 00:11:27.908 #define SPDK_CONFIG_FUZZER_LIB 00:11:27.908 #undef SPDK_CONFIG_GOLANG 00:11:27.908 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:27.908 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:27.908 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:27.908 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:27.908 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:27.908 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:27.908 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:27.908 #define SPDK_CONFIG_IDXD 1 00:11:27.908 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:27.908 #undef SPDK_CONFIG_IPSEC_MB 00:11:27.908 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:27.908 #define SPDK_CONFIG_ISAL 1 00:11:27.908 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:27.908 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:27.908 #define SPDK_CONFIG_LIBDIR 00:11:27.908 #undef SPDK_CONFIG_LTO 00:11:27.908 #define SPDK_CONFIG_MAX_LCORES 128 00:11:27.908 #define SPDK_CONFIG_NVME_CUSE 1 00:11:27.908 #undef SPDK_CONFIG_OCF 00:11:27.908 #define SPDK_CONFIG_OCF_PATH 00:11:27.908 #define SPDK_CONFIG_OPENSSL_PATH 00:11:27.908 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:27.908 #define SPDK_CONFIG_PGO_DIR 00:11:27.908 #undef SPDK_CONFIG_PGO_USE 00:11:27.908 #define SPDK_CONFIG_PREFIX /usr/local 00:11:27.908 #undef SPDK_CONFIG_RAID5F 00:11:27.908 #undef SPDK_CONFIG_RBD 00:11:27.908 #define SPDK_CONFIG_RDMA 1 00:11:27.908 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:27.908 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:27.908 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:27.908 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:27.908 #define SPDK_CONFIG_SHARED 1 00:11:27.908 #undef SPDK_CONFIG_SMA 00:11:27.908 #define SPDK_CONFIG_TESTS 1 00:11:27.908 #undef SPDK_CONFIG_TSAN 00:11:27.908 #define SPDK_CONFIG_UBLK 1 00:11:27.908 #define SPDK_CONFIG_UBSAN 1 00:11:27.908 #undef SPDK_CONFIG_UNIT_TESTS 00:11:27.908 #undef SPDK_CONFIG_URING 00:11:27.908 #define SPDK_CONFIG_URING_PATH 00:11:27.908 #undef 
SPDK_CONFIG_URING_ZNS 00:11:27.908 #undef SPDK_CONFIG_USDT 00:11:27.908 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:27.908 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:27.908 #define SPDK_CONFIG_VFIO_USER 1 00:11:27.908 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:27.908 #define SPDK_CONFIG_VHOST 1 00:11:27.908 #define SPDK_CONFIG_VIRTIO 1 00:11:27.908 #undef SPDK_CONFIG_VTUNE 00:11:27.908 #define SPDK_CONFIG_VTUNE_DIR 00:11:27.908 #define SPDK_CONFIG_WERROR 1 00:11:27.908 #define SPDK_CONFIG_WPDK_DIR 00:11:27.908 #undef SPDK_CONFIG_XNVME 00:11:27.908 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.909 22:59:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:27.909 22:59:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:27.909 
22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:27.909 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:27.910 22:59:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:27.910 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:27.910 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:27.910 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:27.910 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:11:27.910 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:27.910 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:11:27.910 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:27.910 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:11:27.910 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:27.910 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:11:27.910 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:27.910 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:27.910 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:27.910 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:11:27.910 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:27.910 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:11:27.910 
22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:11:27.910 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:27.910 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:11:28.172 22:59:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:28.172 
22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:28.172 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export 
SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@265 -- # export valgrind= 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j144 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 740851 ]] 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 740851 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:11:28.173 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.ETt6Yr 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.ETt6Yr/tests/target /tmp/spdk.ETt6Yr 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # df -T 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=953012224 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:11:28.174 22:59:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4331417600 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=122852589568 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=129370992640 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=6518403072 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=64623312896 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=64685494272 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=62181376 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:28.174 22:59:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=25850593280 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=25874198528 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=23605248 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=efivarfs 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=efivarfs 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=353280 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=507904 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=150528 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=64684675072 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=64685498368 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@365 -- # uses["$mount"]=823296 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=12937093120 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=12937097216 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:11:28.174 * Looking for test storage... 
00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=122852589568 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # new_size=8732995584 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.174 22:59:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:28.174 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:28.175 22:59:45 
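The trace above shows `set_test_storage` picking a working directory: it builds a candidate list (the test dir, a `mktemp` fallback), walks `df` output into the `mounts`/`fss`/`avails` arrays, and keeps the first candidate with enough free space, exporting it as `SPDK_TEST_STORAGE`. A minimal sketch of that selection logic follows; the helper name `pick_storage` and the example paths are assumptions for illustration, not SPDK's actual code.

```shell
# Sketch of the set_test_storage candidate selection traced above.
requested_size=$((2 * 1024 * 1024 * 1024))      # 2 GiB, as in the trace
storage_fallback=$(mktemp -udt spdk.XXXXXX)     # -u: dry run, path only
testdir=${testdir:-/tmp/example-testdir}        # placeholder test directory
candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")

pick_storage() {
    local dir avail
    for dir in "$@"; do
        mkdir -p "$dir" || continue
        # df -P reports 1K blocks; field 4 of line 2 is the available space
        avail=$(( $(df -P "$dir" | awk 'NR==2 {print $4}') * 1024 ))
        if (( avail >= requested_size )); then
            printf '%s\n' "$dir"
            return 0
        fi
    done
    return 1
}

SPDK_TEST_STORAGE=$(pick_storage "${candidates[@]}") || echo 'no suitable test storage'
```

In the log, the first candidate (`.../spdk/test/nvmf/target` on the `/` overlay mount, ~122 GB available) satisfies the ~2.2 GB request immediately, so the fallbacks under `/tmp/spdk.ETt6Yr` are never used.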
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- 
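Note how each `paths/export.sh` line above prepends the same toolchain directories again, so the final `PATH` contains the golangci/protoc/go triple many times over. Harmless, but noisy. A small sketch of how such a `PATH` could be de-duplicated while preserving order (this is an illustration, not part of the SPDK scripts):

```shell
# Keep the first occurrence of each colon-separated PATH entry, in order.
dedupe_path() {
    local entry out=
    local IFS=:
    for entry in $1; do
        case ":$out:" in
            *":$entry:"*) ;;                  # already present, skip
            *) out=${out:+$out:}$entry ;;     # append, colon-separated
        esac
    done
    printf '%s\n' "$out"
}

PATH=$(dedupe_path "$PATH")
```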
# nvmftestinit 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:28.175 22:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.318 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:36.318 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:36.318 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:36.318 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:36.318 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 
00:11:36.318 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:36.318 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:36.318 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:36.318 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:36.318 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:11:36.318 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:36.318 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:11:36.318 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:36.318 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:11:36.318 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:36.318 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:36.318 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:36.318 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:36.318 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:36.318 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:36.318 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:36.318 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:11:36.318 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:36.318 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:36.318 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:36.318 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:36.318 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:36.318 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:36.318 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:36.318 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:36.318 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:36.318 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:36.319 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:36.319 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:36.319 Found net devices under 0000:31:00.0: cvl_0_0 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:36.319 Found net devices under 0000:31:00.1: cvl_0_1 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:36.319 
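The loop traced above (`gather_supported_nvmf_pci_devs`) resolves each supported NIC's PCI address to its kernel interface names through sysfs, producing the `Found net devices under 0000:31:00.x: cvl_0_x` lines. A standalone sketch of that lookup, with the sysfs root passed as a parameter so it can be exercised against a fake tree; the function name is an assumption, not SPDK's:

```shell
# List the network interfaces sysfs registers under each given PCI device.
list_pci_net_devs() {
    local root=$1; shift                       # sysfs root, normally /sys
    local pci dev
    for pci in "$@"; do
        for dev in "$root/bus/pci/devices/$pci/net/"*; do
            [ -e "$dev" ] || continue          # glob may match nothing
            printf '%s\n' "${dev##*/}"         # interface name only
        done
    done
}

# On the machine in the log, this prints the cvl_0_0/cvl_0_1 pair:
#   list_pci_net_devs /sys 0000:31:00.0 0000:31:00.1
```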
22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:36.319 22:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:36.319 22:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:36.319 22:59:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:36.319 22:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:36.319 22:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:36.580 22:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:36.580 22:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:36.580 22:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:36.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:36.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:11:36.580 00:11:36.580 --- 10.0.0.2 ping statistics --- 00:11:36.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.580 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:11:36.580 22:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:36.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:36.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.377 ms 00:11:36.580 00:11:36.580 --- 10.0.0.1 ping statistics --- 00:11:36.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.580 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:11:36.580 22:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:36.580 22:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:11:36.580 22:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:36.580 22:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:36.580 22:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:36.580 22:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:36.580 22:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:36.580 22:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:36.580 22:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:36.580 22:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:36.580 22:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:36.580 22:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:36.580 22:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.580 ************************************ 00:11:36.580 START TEST nvmf_filesystem_no_in_capsule 00:11:36.580 ************************************ 00:11:36.580 22:59:54 
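Before `run_test` starts, `nvmf_tcp_init` above isolates the target NIC in a network namespace, addresses both ends, opens port 4420, and ping-verifies reachability in both directions. The same plumbing, condensed into one hedged helper (root required; the `cvl_0_*` interface names and 10.0.0.x addresses are taken from this log and will differ on other machines):

```shell
# Condensed sketch of the nvmf_tcp_init namespace setup traced above.
setup_tcp_netns() {
    local ns=$1 target_if=$2 initiator_if=$3
    ip netns add "$ns"
    ip link set "$target_if" netns "$ns"            # target NIC into the ns
    ip addr add 10.0.0.1/24 dev "$initiator_if"     # initiator side
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    # Verify reachability both ways, as the log's two pings do
    ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1
}

# Usage (root only): setup_tcp_netns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

With this in place, `NVMF_TARGET_NS_CMD` (`ip netns exec cvl_0_0_ns_spdk ...`) is prefixed onto `NVMF_APP`, which is why `nvmf_tgt` is launched inside the namespace later in the log.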
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:36.580 22:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:36.580 22:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:36.580 22:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:36.580 22:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:36.580 22:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.580 22:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=745136 00:11:36.580 22:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 745136 00:11:36.580 22:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 745136 ']' 00:11:36.580 22:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.580 22:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:36.580 22:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:36.580 22:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:36.580 22:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.580 22:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:36.580 [2024-07-24 22:59:54.343870] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:11:36.580 [2024-07-24 22:59:54.343927] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:36.840 EAL: No free 2048 kB hugepages reported on node 1 00:11:36.840 [2024-07-24 22:59:54.422782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:36.840 [2024-07-24 22:59:54.497685] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:36.840 [2024-07-24 22:59:54.497723] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:36.840 [2024-07-24 22:59:54.497731] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:36.840 [2024-07-24 22:59:54.497737] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:36.840 [2024-07-24 22:59:54.497742] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:36.840 [2024-07-24 22:59:54.497891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.840 [2024-07-24 22:59:54.498004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:36.840 [2024-07-24 22:59:54.498163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.840 [2024-07-24 22:59:54.498164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:37.411 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:37.411 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:37.411 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:37.411 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:37.411 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.411 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:37.411 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:37.411 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:37.411 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.411 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.411 [2024-07-24 
22:59:55.171703] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:37.411 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.411 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:37.411 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.411 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.672 Malloc1 00:11:37.672 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.672 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:37.672 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.672 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.672 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.672 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:37.672 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.672 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.672 22:59:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.672 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:37.672 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.672 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.672 [2024-07-24 22:59:55.305847] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:37.672 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.672 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:37.672 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:37.672 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:37.672 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:37.672 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:37.672 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:37.672 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.672 22:59:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.672 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.672 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:37.672 { 00:11:37.672 "name": "Malloc1", 00:11:37.673 "aliases": [ 00:11:37.673 "f194cbe6-f15b-4e49-ba3f-c6542cf67347" 00:11:37.673 ], 00:11:37.673 "product_name": "Malloc disk", 00:11:37.673 "block_size": 512, 00:11:37.673 "num_blocks": 1048576, 00:11:37.673 "uuid": "f194cbe6-f15b-4e49-ba3f-c6542cf67347", 00:11:37.673 "assigned_rate_limits": { 00:11:37.673 "rw_ios_per_sec": 0, 00:11:37.673 "rw_mbytes_per_sec": 0, 00:11:37.673 "r_mbytes_per_sec": 0, 00:11:37.673 "w_mbytes_per_sec": 0 00:11:37.673 }, 00:11:37.673 "claimed": true, 00:11:37.673 "claim_type": "exclusive_write", 00:11:37.673 "zoned": false, 00:11:37.673 "supported_io_types": { 00:11:37.673 "read": true, 00:11:37.673 "write": true, 00:11:37.673 "unmap": true, 00:11:37.673 "flush": true, 00:11:37.673 "reset": true, 00:11:37.673 "nvme_admin": false, 00:11:37.673 "nvme_io": false, 00:11:37.673 "nvme_io_md": false, 00:11:37.673 "write_zeroes": true, 00:11:37.673 "zcopy": true, 00:11:37.673 "get_zone_info": false, 00:11:37.673 "zone_management": false, 00:11:37.673 "zone_append": false, 00:11:37.673 "compare": false, 00:11:37.673 "compare_and_write": false, 00:11:37.673 "abort": true, 00:11:37.673 "seek_hole": false, 00:11:37.673 "seek_data": false, 00:11:37.673 "copy": true, 00:11:37.673 "nvme_iov_md": false 00:11:37.673 }, 00:11:37.673 "memory_domains": [ 00:11:37.673 { 00:11:37.673 "dma_device_id": "system", 00:11:37.673 "dma_device_type": 1 00:11:37.673 }, 00:11:37.673 { 00:11:37.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.673 "dma_device_type": 2 00:11:37.673 } 00:11:37.673 ], 
00:11:37.673 "driver_specific": {} 00:11:37.673 } 00:11:37.673 ]' 00:11:37.673 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:37.673 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:37.673 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:37.673 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:37.673 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:37.673 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:37.673 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:37.673 22:59:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:39.585 22:59:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:39.585 22:59:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:39.585 22:59:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:39.585 22:59:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:39.585 22:59:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:41.495 22:59:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:41.495 22:59:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:41.495 22:59:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:41.495 22:59:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:41.495 22:59:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:41.495 22:59:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:41.495 22:59:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:41.495 22:59:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:41.495 22:59:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:41.495 22:59:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:41.495 22:59:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:41.495 22:59:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:41.495 22:59:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:41.495 22:59:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:41.495 22:59:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:41.495 22:59:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:41.495 22:59:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:41.495 22:59:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:41.755 22:59:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:43.140 23:00:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:43.140 23:00:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:43.140 23:00:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:43.140 23:00:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:43.140 23:00:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.140 ************************************ 
00:11:43.140 START TEST filesystem_ext4 00:11:43.140 ************************************ 00:11:43.140 23:00:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:43.140 23:00:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:43.140 23:00:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:43.140 23:00:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:43.140 23:00:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:43.140 23:00:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:43.140 23:00:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:43.140 23:00:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:43.140 23:00:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:43.140 23:00:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:43.140 23:00:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:43.140 mke2fs 1.46.5 (30-Dec-2021) 00:11:43.140 
Discarding device blocks: 0/522240 done 00:11:43.140 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:43.140 Filesystem UUID: 07db708c-55d2-4721-a4ad-ee3692df1f9c 00:11:43.140 Superblock backups stored on blocks: 00:11:43.140 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:43.140 00:11:43.140 Allocating group tables: 0/64 done 00:11:43.140 Writing inode tables: 0/64 done 00:11:45.948 Creating journal (8192 blocks): done 00:11:46.778 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:11:46.778 00:11:46.778 23:00:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:46.778 23:00:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:47.778 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:47.778 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:47.778 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:47.778 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:47.778 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:47.778 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:47.778 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 745136 00:11:47.778 
23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:47.778 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:47.778 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:47.778 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:47.778 00:11:47.778 real 0m4.960s 00:11:47.778 user 0m0.029s 00:11:47.778 sys 0m0.051s 00:11:47.778 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:47.778 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:47.778 ************************************ 00:11:47.778 END TEST filesystem_ext4 00:11:47.778 ************************************ 00:11:48.056 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:48.056 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:48.056 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:48.056 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.056 ************************************ 00:11:48.056 START TEST filesystem_btrfs 00:11:48.056 ************************************ 00:11:48.056 23:00:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:48.056 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:48.057 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:48.057 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:48.057 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:48.057 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:48.057 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:48.057 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:48.057 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:48.057 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:48.057 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:48.057 btrfs-progs v6.6.2 00:11:48.057 See https://btrfs.readthedocs.io for more information. 
00:11:48.057 00:11:48.057 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:48.057 NOTE: several default settings have changed in version 5.15, please make sure 00:11:48.057 this does not affect your deployments: 00:11:48.057 - DUP for metadata (-m dup) 00:11:48.057 - enabled no-holes (-O no-holes) 00:11:48.057 - enabled free-space-tree (-R free-space-tree) 00:11:48.057 00:11:48.057 Label: (null) 00:11:48.057 UUID: feae4fce-1868-45fb-9b3d-1b1c43c6b6af 00:11:48.057 Node size: 16384 00:11:48.057 Sector size: 4096 00:11:48.057 Filesystem size: 510.00MiB 00:11:48.057 Block group profiles: 00:11:48.057 Data: single 8.00MiB 00:11:48.057 Metadata: DUP 32.00MiB 00:11:48.057 System: DUP 8.00MiB 00:11:48.057 SSD detected: yes 00:11:48.057 Zoned device: no 00:11:48.057 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:48.057 Runtime features: free-space-tree 00:11:48.057 Checksum: crc32c 00:11:48.057 Number of devices: 1 00:11:48.057 Devices: 00:11:48.057 ID SIZE PATH 00:11:48.057 1 510.00MiB /dev/nvme0n1p1 00:11:48.057 00:11:48.057 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:48.057 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:48.317 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:48.317 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:48.317 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:48.317 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- 
target/filesystem.sh@27 -- # sync 00:11:48.317 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:48.317 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:48.317 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 745136 00:11:48.318 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:48.318 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:48.318 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:48.318 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:48.318 00:11:48.318 real 0m0.320s 00:11:48.318 user 0m0.018s 00:11:48.318 sys 0m0.065s 00:11:48.318 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:48.318 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:48.318 ************************************ 00:11:48.318 END TEST filesystem_btrfs 00:11:48.318 ************************************ 00:11:48.318 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:48.318 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:48.318 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:48.318 23:00:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.318 ************************************ 00:11:48.318 START TEST filesystem_xfs 00:11:48.318 ************************************ 00:11:48.318 23:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:48.318 23:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:48.318 23:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:48.318 23:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:48.318 23:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:48.318 23:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:48.318 23:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:48.318 23:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:48.318 23:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:48.318 
23:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:48.318 23:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:48.318 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:48.318 = sectsz=512 attr=2, projid32bit=1 00:11:48.318 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:48.318 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:48.318 data = bsize=4096 blocks=130560, imaxpct=25 00:11:48.318 = sunit=0 swidth=0 blks 00:11:48.318 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:48.318 log =internal log bsize=4096 blocks=16384, version=2 00:11:48.318 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:48.318 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:49.701 Discarding blocks...Done. 00:11:49.701 23:00:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:49.701 23:00:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:51.614 23:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
target/filesystem.sh@29 -- # i=0 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 745136 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:51.614 00:11:51.614 real 0m3.086s 00:11:51.614 user 0m0.018s 00:11:51.614 sys 0m0.061s 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:51.614 ************************************ 00:11:51.614 END TEST filesystem_xfs 00:11:51.614 ************************************ 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:11:51.614 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:51.614 23:00:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 745136 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 745136 ']' 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 745136 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 745136 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 745136' 00:11:51.614 killing process with pid 745136 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 745136 00:11:51.614 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 745136 00:11:51.875 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:51.875 00:11:51.875 real 0m15.329s 00:11:51.875 user 1m0.407s 00:11:51.875 sys 0m1.138s 00:11:51.875 23:00:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:51.875 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.875 ************************************ 00:11:51.875 END TEST nvmf_filesystem_no_in_capsule 00:11:51.875 ************************************ 00:11:51.875 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:51.875 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:51.875 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:51.875 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:52.136 ************************************ 00:11:52.136 START TEST nvmf_filesystem_in_capsule 00:11:52.136 ************************************ 00:11:52.136 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:52.136 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:52.136 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:52.136 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:52.136 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:52.136 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.136 23:00:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=748789 00:11:52.136 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 748789 00:11:52.136 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:52.136 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 748789 ']' 00:11:52.136 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.136 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:52.136 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.136 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:52.136 23:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.136 [2024-07-24 23:00:09.736692] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:11:52.136 [2024-07-24 23:00:09.736738] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.136 EAL: No free 2048 kB hugepages reported on node 1 00:11:52.136 [2024-07-24 23:00:09.807978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:52.136 [2024-07-24 23:00:09.873820] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:52.136 [2024-07-24 23:00:09.873855] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:52.136 [2024-07-24 23:00:09.873863] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:52.136 [2024-07-24 23:00:09.873869] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:52.136 [2024-07-24 23:00:09.873875] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:52.136 [2024-07-24 23:00:09.874028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.136 [2024-07-24 23:00:09.874140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.136 [2024-07-24 23:00:09.874293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.136 [2024-07-24 23:00:09.874295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.078 [2024-07-24 23:00:10.566713] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.078 Malloc1 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.078 23:00:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.078 [2024-07-24 23:00:10.692799] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.078 23:00:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.078 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:53.078 { 00:11:53.078 "name": "Malloc1", 00:11:53.078 "aliases": [ 00:11:53.078 "96671f53-fd64-4b51-9ae5-aa81fca1d368" 00:11:53.078 ], 00:11:53.078 "product_name": "Malloc disk", 00:11:53.078 "block_size": 512, 00:11:53.078 "num_blocks": 1048576, 00:11:53.078 "uuid": "96671f53-fd64-4b51-9ae5-aa81fca1d368", 00:11:53.078 "assigned_rate_limits": { 00:11:53.078 "rw_ios_per_sec": 0, 00:11:53.078 "rw_mbytes_per_sec": 0, 00:11:53.078 "r_mbytes_per_sec": 0, 00:11:53.078 "w_mbytes_per_sec": 0 00:11:53.078 }, 00:11:53.078 "claimed": true, 00:11:53.078 "claim_type": "exclusive_write", 00:11:53.078 "zoned": false, 00:11:53.078 "supported_io_types": { 00:11:53.078 "read": true, 00:11:53.078 "write": true, 00:11:53.078 "unmap": true, 00:11:53.078 "flush": true, 00:11:53.078 "reset": true, 00:11:53.078 "nvme_admin": false, 00:11:53.079 "nvme_io": false, 00:11:53.079 "nvme_io_md": false, 00:11:53.079 "write_zeroes": true, 00:11:53.079 "zcopy": true, 00:11:53.079 "get_zone_info": false, 00:11:53.079 "zone_management": false, 00:11:53.079 "zone_append": false, 00:11:53.079 "compare": false, 00:11:53.079 "compare_and_write": false, 00:11:53.079 "abort": true, 00:11:53.079 "seek_hole": false, 00:11:53.079 "seek_data": false, 00:11:53.079 "copy": true, 00:11:53.079 "nvme_iov_md": false 00:11:53.079 }, 00:11:53.079 "memory_domains": [ 00:11:53.079 { 00:11:53.079 "dma_device_id": "system", 00:11:53.079 "dma_device_type": 1 00:11:53.079 }, 00:11:53.079 { 00:11:53.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.079 "dma_device_type": 2 00:11:53.079 } 00:11:53.079 ], 00:11:53.079 
"driver_specific": {} 00:11:53.079 } 00:11:53.079 ]' 00:11:53.079 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:53.079 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:53.079 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:53.079 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:53.079 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:53.079 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:53.079 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:53.079 23:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:54.990 23:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:54.990 23:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:54.990 23:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.990 23:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:11:54.990 23:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:56.903 23:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:56.903 23:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:56.903 23:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:56.903 23:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:56.903 23:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:56.903 23:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:56.903 23:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:56.903 23:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:56.903 23:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:56.903 23:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:56.903 23:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:56.903 23:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:56.903 23:00:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:56.903 23:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:56.903 23:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:56.903 23:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:56.904 23:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:56.904 23:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:57.164 23:00:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:58.550 23:00:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:58.550 23:00:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:58.550 23:00:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:58.550 23:00:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.550 23:00:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.550 ************************************ 00:11:58.550 START TEST filesystem_in_capsule_ext4 00:11:58.550 ************************************ 00:11:58.550 23:00:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:58.550 23:00:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:58.550 23:00:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:58.550 23:00:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:58.550 23:00:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:58.550 23:00:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:58.550 23:00:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:58.550 23:00:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:58.550 23:00:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:58.550 23:00:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:58.550 23:00:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:58.550 mke2fs 1.46.5 (30-Dec-2021) 00:11:58.550 Discarding device blocks: 
0/522240 done 00:11:58.550 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:58.550 Filesystem UUID: 04181d25-1d30-4ca4-9747-f59484343f78 00:11:58.550 Superblock backups stored on blocks: 00:11:58.550 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:58.550 00:11:58.550 Allocating group tables: 0/64 done 00:11:58.550 Writing inode tables: 0/64 done 00:11:58.811 Creating journal (8192 blocks): done 00:11:58.811 Writing superblocks and filesystem accounting information: 0/64 done 00:11:58.811 00:11:58.811 23:00:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:58.811 23:00:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:59.071 23:00:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:59.071 23:00:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:59.071 23:00:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:59.071 23:00:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:59.071 23:00:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:59.071 23:00:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:59.071 23:00:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 748789 00:11:59.071 23:00:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:59.072 23:00:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:59.072 23:00:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:59.072 23:00:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:59.072 00:11:59.072 real 0m0.757s 00:11:59.072 user 0m0.026s 00:11:59.072 sys 0m0.047s 00:11:59.072 23:00:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:59.072 23:00:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:59.072 ************************************ 00:11:59.072 END TEST filesystem_in_capsule_ext4 00:11:59.072 ************************************ 00:11:59.072 23:00:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:59.072 23:00:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:59.072 23:00:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:59.072 23:00:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.072 ************************************ 00:11:59.072 START 
TEST filesystem_in_capsule_btrfs
00:11:59.072 ************************************
00:11:59.072 23:00:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1
00:11:59.072 23:00:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:11:59.072 23:00:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:11:59.072 23:00:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:11:59.072 23:00:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs
00:11:59.072 23:00:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1
00:11:59.072 23:00:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0
00:11:59.072 23:00:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force
00:11:59.072 23:00:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']'
00:11:59.072 23:00:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f
00:11:59.072 23:00:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:11:59.643 btrfs-progs v6.6.2
00:11:59.643 See https://btrfs.readthedocs.io for more information.
00:11:59.643
00:11:59.643 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:11:59.643 NOTE: several default settings have changed in version 5.15, please make sure
00:11:59.643 this does not affect your deployments:
00:11:59.643 - DUP for metadata (-m dup)
00:11:59.643 - enabled no-holes (-O no-holes)
00:11:59.643 - enabled free-space-tree (-R free-space-tree)
00:11:59.643
00:11:59.643 Label: (null)
00:11:59.643 UUID: 6e5ed49a-29a3-481f-bdc4-138b771aefb5
00:11:59.643 Node size: 16384
00:11:59.643 Sector size: 4096
00:11:59.643 Filesystem size: 510.00MiB
00:11:59.643 Block group profiles:
00:11:59.643 Data: single 8.00MiB
00:11:59.643 Metadata: DUP 32.00MiB
00:11:59.643 System: DUP 8.00MiB
00:11:59.643 SSD detected: yes
00:11:59.643 Zoned device: no
00:11:59.643 Incompat features: extref, skinny-metadata, no-holes, free-space-tree
00:11:59.643 Runtime features: free-space-tree
00:11:59.643 Checksum: crc32c
00:11:59.643 Number of devices: 1
00:11:59.643 Devices:
00:11:59.643 ID SIZE PATH
00:11:59.643 1 510.00MiB /dev/nvme0n1p1
00:11:59.643
00:11:59.643 23:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0
00:11:59.643 23:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:11:59.643 23:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:11:59.643 23:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync
00:11:59.643 23:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:11:59.643 23:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync
00:11:59.643 23:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0
00:11:59.643 23:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:11:59.904 23:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 748789
00:11:59.904 23:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:11:59.904 23:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:11:59.904 23:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:11:59.904 23:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:11:59.904
00:11:59.904 real 0m0.638s
00:11:59.904 user 0m0.029s
00:11:59.904 sys 0m0.057s
00:11:59.904 23:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:59.904 23:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x
00:11:59.904 ************************************
00:11:59.904 END TEST filesystem_in_capsule_btrfs
00:11:59.904 ************************************
00:11:59.904 23:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1
00:11:59.904 23:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:11:59.904 23:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:59.904 23:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:59.904 ************************************
00:11:59.904 START TEST filesystem_in_capsule_xfs
00:11:59.904 ************************************
00:11:59.904 23:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1
00:11:59.904 23:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:11:59.904 23:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:11:59.904 23:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:11:59.905 23:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs
00:11:59.905 23:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1
00:11:59.905 23:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0
00:11:59.905 23:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force
00:11:59.905 23:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']'
00:11:59.905 23:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f
00:11:59.905 23:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1
00:11:59.905 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:11:59.905 = sectsz=512 attr=2, projid32bit=1
00:11:59.905 = crc=1 finobt=1, sparse=1, rmapbt=0
00:11:59.905 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:11:59.905 data = bsize=4096 blocks=130560, imaxpct=25
00:11:59.905 = sunit=0 swidth=0 blks
00:11:59.905 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:11:59.905 log =internal log bsize=4096 blocks=16384, version=2
00:11:59.905 = sectsz=512 sunit=0 blks, lazy-count=1
00:11:59.905 realtime =none extsz=4096 blocks=0, rtextents=0
00:12:00.846 Discarding blocks...Done.
00:12:00.846 23:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0
00:12:00.846 23:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:12:03.396 23:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:12:03.396 23:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync
00:12:03.396 23:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:12:03.396 23:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync
00:12:03.396 23:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0
00:12:03.396 23:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:12:03.396 23:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 748789
00:12:03.396 23:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:12:03.396 23:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:12:03.396 23:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:12:03.396 23:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:12:03.396
00:12:03.396 real 0m3.415s
00:12:03.396 user 0m0.024s
00:12:03.396 sys 0m0.056s
00:12:03.396 23:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:03.396 23:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x
00:12:03.396 ************************************
00:12:03.396 END TEST filesystem_in_capsule_xfs
00:12:03.396 ************************************
00:12:03.396 23:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:12:03.396 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync
00:12:03.396 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:03.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:03.657 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:03.657 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0
00:12:03.657 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:12:03.657 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:03.657 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:12:03.657 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:03.657 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0
00:12:03.657 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:03.657 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:03.657 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:03.657 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:03.657 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:12:03.657 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 748789
00:12:03.657 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 748789 ']'
00:12:03.657 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 748789
00:12:03.657 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname
00:12:03.657 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:12:03.657 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 748789
00:12:03.657 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:12:03.657 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:12:03.657 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 748789'
00:12:03.658 killing process with pid 748789
00:12:03.658 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 748789
00:12:03.658 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 748789
00:12:03.918 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid=
00:12:03.918
00:12:03.918 real 0m11.896s
00:12:03.918 user 0m46.889s
00:12:03.918 sys 0m1.036s
00:12:03.918 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:03.918 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:03.918 ************************************
00:12:03.918 END TEST nvmf_filesystem_in_capsule
00:12:03.918 ************************************
00:12:03.918 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini
00:12:03.918 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:12:03.918 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync
00:12:03.918 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:12:03.918 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e
00:12:03.918 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:03.918 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:12:03.918 rmmod nvme_tcp
00:12:03.918 rmmod nvme_fabrics
00:12:03.918 rmmod nvme_keyring
00:12:03.918 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:12:03.918 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e
00:12:03.918 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0
00:12:03.918 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:12:03.918 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:12:03.918 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:12:03.918 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:12:03.918 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:12:03.918 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns
00:12:03.918 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:03.918 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:03.918 23:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:12:06.466
00:12:06.466 real 0m38.257s
00:12:06.466 user 1m49.804s
00:12:06.466 sys 0m8.624s
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:12:06.466 ************************************
00:12:06.466 END TEST nvmf_filesystem
00:12:06.466 ************************************
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:12:06.466 ************************************
00:12:06.466 START TEST nvmf_target_discovery
00:12:06.466 ************************************
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp
00:12:06.466 * Looking for test storage...
00:12:06.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable
00:12:06.466 23:00:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:14.609 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:12:14.609 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=()
00:12:14.609 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs
00:12:14.609 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=()
00:12:14.609 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:12:14.609 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=()
00:12:14.609 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers
00:12:14.609 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=()
00:12:14.609 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs
00:12:14.609 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=()
00:12:14.609 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810
00:12:14.609 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=()
00:12:14.609 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722
00:12:14.609 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=()
00:12:14.609 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx
00:12:14.609 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:12:14.609 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:12:14.609 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:12:14.609 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:12:14.609 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:12:14.609 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:12:14.609 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:12:14.609 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:12:14.609 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:12:14.609 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:12:14.609 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:12:14.609 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:12:14.609 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:12:14.609 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:12:14.609 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:12:14.609 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:12:14.610 Found 0000:31:00.0 (0x8086 - 0x159b)
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:12:14.610 Found 0000:31:00.1 (0x8086 - 0x159b)
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]]
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:12:14.610 Found net devices under 0000:31:00.0: cvl_0_0
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]]
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:12:14.610 Found net devices under 0000:31:00.1: cvl_0_1
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:12:14.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:14.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:12:14.610 00:12:14.610 --- 10.0.0.2 ping statistics --- 00:12:14.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.610 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:14.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:14.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.375 ms 00:12:14.610 00:12:14.610 --- 10.0.0.1 ping statistics --- 00:12:14.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.610 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:14.610 23:00:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=756146 00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 756146 00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 756146 ']' 00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.610 23:00:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:14.610 [2024-07-24 23:00:32.049973] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:12:14.610 [2024-07-24 23:00:32.050037] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.610 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.610 [2024-07-24 23:00:32.128635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:14.610 [2024-07-24 23:00:32.203826] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:14.610 [2024-07-24 23:00:32.203862] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:14.610 [2024-07-24 23:00:32.203870] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:14.611 [2024-07-24 23:00:32.203876] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:14.611 [2024-07-24 23:00:32.203881] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:14.611 [2024-07-24 23:00:32.204027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.611 [2024-07-24 23:00:32.204141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:14.611 [2024-07-24 23:00:32.204301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.611 [2024-07-24 23:00:32.204301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.182 [2024-07-24 23:00:32.884745] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:15.182 23:00:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.182 Null1 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.182 [2024-07-24 23:00:32.945078] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.182 Null2 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.182 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.442 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.442 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:15.442 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.442 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.442 
23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.442 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:15.442 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.442 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.442 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.442 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:15.442 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:15.442 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.442 23:00:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.442 Null3 00:12:15.442 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.442 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:15.442 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.442 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.442 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.442 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode3 Null3 00:12:15.442 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.442 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.442 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.442 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:15.442 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.442 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.442 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.442 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:15.442 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:15.442 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.442 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.442 Null4 00:12:15.443 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.443 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:15.443 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.443 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:15.443 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.443 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:15.443 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.443 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.443 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.443 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:15.443 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.443 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.443 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.443 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:15.443 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.443 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.443 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.443 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:15.443 23:00:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.443 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.443 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.443 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 4420 00:12:15.703 00:12:15.703 Discovery Log Number of Records 6, Generation counter 6 00:12:15.703 =====Discovery Log Entry 0====== 00:12:15.703 trtype: tcp 00:12:15.703 adrfam: ipv4 00:12:15.703 subtype: current discovery subsystem 00:12:15.703 treq: not required 00:12:15.703 portid: 0 00:12:15.703 trsvcid: 4420 00:12:15.703 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:15.703 traddr: 10.0.0.2 00:12:15.703 eflags: explicit discovery connections, duplicate discovery information 00:12:15.703 sectype: none 00:12:15.703 =====Discovery Log Entry 1====== 00:12:15.703 trtype: tcp 00:12:15.703 adrfam: ipv4 00:12:15.703 subtype: nvme subsystem 00:12:15.703 treq: not required 00:12:15.703 portid: 0 00:12:15.703 trsvcid: 4420 00:12:15.703 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:15.703 traddr: 10.0.0.2 00:12:15.703 eflags: none 00:12:15.703 sectype: none 00:12:15.703 =====Discovery Log Entry 2====== 00:12:15.703 trtype: tcp 00:12:15.703 adrfam: ipv4 00:12:15.703 subtype: nvme subsystem 00:12:15.703 treq: not required 00:12:15.703 portid: 0 00:12:15.703 trsvcid: 4420 00:12:15.703 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:15.703 traddr: 10.0.0.2 00:12:15.703 eflags: none 00:12:15.703 sectype: none 00:12:15.703 =====Discovery Log Entry 3====== 00:12:15.703 trtype: tcp 00:12:15.703 adrfam: ipv4 00:12:15.703 subtype: nvme subsystem 00:12:15.703 treq: not required 00:12:15.703 portid: 
0 00:12:15.703 trsvcid: 4420 00:12:15.703 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:15.703 traddr: 10.0.0.2 00:12:15.703 eflags: none 00:12:15.703 sectype: none 00:12:15.703 =====Discovery Log Entry 4====== 00:12:15.703 trtype: tcp 00:12:15.703 adrfam: ipv4 00:12:15.703 subtype: nvme subsystem 00:12:15.703 treq: not required 00:12:15.703 portid: 0 00:12:15.703 trsvcid: 4420 00:12:15.703 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:15.703 traddr: 10.0.0.2 00:12:15.703 eflags: none 00:12:15.703 sectype: none 00:12:15.703 =====Discovery Log Entry 5====== 00:12:15.703 trtype: tcp 00:12:15.703 adrfam: ipv4 00:12:15.703 subtype: discovery subsystem referral 00:12:15.703 treq: not required 00:12:15.703 portid: 0 00:12:15.703 trsvcid: 4430 00:12:15.703 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:15.703 traddr: 10.0.0.2 00:12:15.703 eflags: none 00:12:15.703 sectype: none 00:12:15.703 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:15.703 Perform nvmf subsystem discovery via RPC 00:12:15.703 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:15.703 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.703 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.703 [ 00:12:15.703 { 00:12:15.703 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:15.703 "subtype": "Discovery", 00:12:15.703 "listen_addresses": [ 00:12:15.703 { 00:12:15.703 "trtype": "TCP", 00:12:15.703 "adrfam": "IPv4", 00:12:15.703 "traddr": "10.0.0.2", 00:12:15.703 "trsvcid": "4420" 00:12:15.703 } 00:12:15.703 ], 00:12:15.703 "allow_any_host": true, 00:12:15.703 "hosts": [] 00:12:15.703 }, 00:12:15.703 { 00:12:15.703 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:15.703 "subtype": "NVMe", 00:12:15.703 "listen_addresses": [ 
00:12:15.703 { 00:12:15.703 "trtype": "TCP", 00:12:15.703 "adrfam": "IPv4", 00:12:15.703 "traddr": "10.0.0.2", 00:12:15.703 "trsvcid": "4420" 00:12:15.703 } 00:12:15.703 ], 00:12:15.703 "allow_any_host": true, 00:12:15.703 "hosts": [], 00:12:15.703 "serial_number": "SPDK00000000000001", 00:12:15.703 "model_number": "SPDK bdev Controller", 00:12:15.703 "max_namespaces": 32, 00:12:15.703 "min_cntlid": 1, 00:12:15.703 "max_cntlid": 65519, 00:12:15.703 "namespaces": [ 00:12:15.703 { 00:12:15.703 "nsid": 1, 00:12:15.703 "bdev_name": "Null1", 00:12:15.703 "name": "Null1", 00:12:15.703 "nguid": "DA95E564ED274D4CBA6C76584E6BCB4F", 00:12:15.703 "uuid": "da95e564-ed27-4d4c-ba6c-76584e6bcb4f" 00:12:15.703 } 00:12:15.703 ] 00:12:15.703 }, 00:12:15.703 { 00:12:15.703 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:15.703 "subtype": "NVMe", 00:12:15.703 "listen_addresses": [ 00:12:15.703 { 00:12:15.703 "trtype": "TCP", 00:12:15.703 "adrfam": "IPv4", 00:12:15.703 "traddr": "10.0.0.2", 00:12:15.703 "trsvcid": "4420" 00:12:15.703 } 00:12:15.703 ], 00:12:15.703 "allow_any_host": true, 00:12:15.703 "hosts": [], 00:12:15.703 "serial_number": "SPDK00000000000002", 00:12:15.703 "model_number": "SPDK bdev Controller", 00:12:15.703 "max_namespaces": 32, 00:12:15.703 "min_cntlid": 1, 00:12:15.703 "max_cntlid": 65519, 00:12:15.703 "namespaces": [ 00:12:15.703 { 00:12:15.703 "nsid": 1, 00:12:15.703 "bdev_name": "Null2", 00:12:15.703 "name": "Null2", 00:12:15.703 "nguid": "C3143B97FA5C40CF9D1985596AC07BFD", 00:12:15.703 "uuid": "c3143b97-fa5c-40cf-9d19-85596ac07bfd" 00:12:15.703 } 00:12:15.703 ] 00:12:15.703 }, 00:12:15.703 { 00:12:15.703 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:15.703 "subtype": "NVMe", 00:12:15.703 "listen_addresses": [ 00:12:15.703 { 00:12:15.703 "trtype": "TCP", 00:12:15.703 "adrfam": "IPv4", 00:12:15.703 "traddr": "10.0.0.2", 00:12:15.703 "trsvcid": "4420" 00:12:15.703 } 00:12:15.703 ], 00:12:15.703 "allow_any_host": true, 00:12:15.703 "hosts": [], 00:12:15.703 
"serial_number": "SPDK00000000000003", 00:12:15.703 "model_number": "SPDK bdev Controller", 00:12:15.703 "max_namespaces": 32, 00:12:15.703 "min_cntlid": 1, 00:12:15.703 "max_cntlid": 65519, 00:12:15.703 "namespaces": [ 00:12:15.703 { 00:12:15.703 "nsid": 1, 00:12:15.703 "bdev_name": "Null3", 00:12:15.703 "name": "Null3", 00:12:15.703 "nguid": "E1ECCC792D0E4A51A2CC529A9D7B68E3", 00:12:15.703 "uuid": "e1eccc79-2d0e-4a51-a2cc-529a9d7b68e3" 00:12:15.703 } 00:12:15.703 ] 00:12:15.703 }, 00:12:15.703 { 00:12:15.703 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:15.703 "subtype": "NVMe", 00:12:15.703 "listen_addresses": [ 00:12:15.703 { 00:12:15.703 "trtype": "TCP", 00:12:15.703 "adrfam": "IPv4", 00:12:15.703 "traddr": "10.0.0.2", 00:12:15.703 "trsvcid": "4420" 00:12:15.703 } 00:12:15.703 ], 00:12:15.703 "allow_any_host": true, 00:12:15.703 "hosts": [], 00:12:15.703 "serial_number": "SPDK00000000000004", 00:12:15.703 "model_number": "SPDK bdev Controller", 00:12:15.703 "max_namespaces": 32, 00:12:15.703 "min_cntlid": 1, 00:12:15.703 "max_cntlid": 65519, 00:12:15.703 "namespaces": [ 00:12:15.703 { 00:12:15.703 "nsid": 1, 00:12:15.703 "bdev_name": "Null4", 00:12:15.703 "name": "Null4", 00:12:15.703 "nguid": "EB96FDCB78E74EB496387658A6E88AE0", 00:12:15.704 "uuid": "eb96fdcb-78e7-4eb4-9638-7658a6e88ae0" 00:12:15.704 } 00:12:15.704 ] 00:12:15.704 } 00:12:15.704 ] 00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name'
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs=
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']'
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:15.704 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:12:15.704 rmmod nvme_tcp
00:12:15.964 rmmod nvme_fabrics
00:12:15.964 rmmod nvme_keyring
00:12:15.964 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:12:15.964 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e
00:12:15.964 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0
00:12:15.964 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 756146 ']'
00:12:15.964 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 756146
00:12:15.964 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 756146 ']'
00:12:15.964 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 756146
00:12:15.964 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname
00:12:15.964 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:12:15.964 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 756146
00:12:15.964 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:12:15.964 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:12:15.964 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 756146'
00:12:15.964 killing process with pid 756146
00:12:15.964 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 756146
00:12:15.964 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 756146
00:12:15.964 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:12:15.964 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:12:15.964 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:12:15.964 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:12:15.964 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns
00:12:15.964 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:15.964 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:15.964 23:00:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:18.574 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:12:18.574
00:12:18.574 real 0m11.948s
00:12:18.574 user 0m8.509s
00:12:18.574 sys 0m6.246s
00:12:18.574 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:18.574 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:18.574 ************************************
00:12:18.574 END TEST nvmf_target_discovery
00:12:18.574 ************************************
00:12:18.574 23:00:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:12:18.574 23:00:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:12:18.574 23:00:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:18.574 23:00:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:12:18.574 ************************************
00:12:18.574 START TEST nvmf_referrals
00:12:18.574 ************************************
00:12:18.574 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:12:18.574 * Looking for test storage...
00:12:18.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:18.574 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:12:18.574 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s
00:12:18.574 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:12:18.574 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:12:18.574 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:18.574 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:18.574 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:18.574 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:18.574 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:18.574 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:18.574 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:18.574 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:18.574 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:12:18.574 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:12:18.574 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:12:18.574 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:12:18.574 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:12:18.574 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:12:18.574 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:12:18.574 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:18.574 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:18.574 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:18.574 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:18.575 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:18.575 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:18.575 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH
00:12:18.575 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:18.575 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0
00:12:18.575 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:12:18.575 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:12:18.575 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:18.575 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:18.575 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:18.575 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:12:18.575 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:12:18.575 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0
00:12:18.575 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2
00:12:18.575 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3
00:12:18.575 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4
00:12:18.575 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430
00:12:18.575 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:12:18.575 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:12:18.575 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit
00:12:18.575 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:12:18.575 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:12:18.575 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs
00:12:18.575 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no
00:12:18.575 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns
00:12:18.575 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:18.575 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:18.575 23:00:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:18.575 23:00:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:12:18.575 23:00:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:12:18.575 23:00:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable
00:12:18.575 23:00:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=()
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=()
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=()
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=()
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=()
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=()
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=()
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:12:26.715 Found 0000:31:00.0 (0x8086 - 0x159b)
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:12:26.715 Found 0000:31:00.1 (0x8086 - 0x159b)
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]]
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:12:26.715 Found net devices under 0000:31:00.0: cvl_0_0
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]]
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:12:26.715 Found net devices under 0000:31:00.1: cvl_0_1
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:26.715 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:12:26.716 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:26.716 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:26.716 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:26.716 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:12:26.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:26.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.532 ms
00:12:26.716
00:12:26.716 --- 10.0.0.2 ping statistics ---
00:12:26.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:26.716 rtt min/avg/max/mdev = 0.532/0.532/0.532/0.000 ms
00:12:26.716 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:26.716 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:26.716 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms
00:12:26.716
00:12:26.716 --- 10.0.0.1 ping statistics ---
00:12:26.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:26.716 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms
00:12:26.716 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:26.716 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0
00:12:26.716 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:12:26.716 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:26.716 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:12:26.716 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:12:26.716 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:26.716 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:12:26.716 23:00:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:12:26.716 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:12:26.716 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:12:26.716 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable
00:12:26.716 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:26.716 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=761079
00:12:26.716 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 761079
00:12:26.716 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:12:26.716 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 761079 ']'
00:12:26.716 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:26.716 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100
00:12:26.716 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:26.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:26.716 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable
00:12:26.716 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:26.716 [2024-07-24 23:00:44.077434] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization...
00:12:26.716 [2024-07-24 23:00:44.077502] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:26.716 EAL: No free 2048 kB hugepages reported on node 1
00:12:26.716 [2024-07-24 23:00:44.156934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:26.716 [2024-07-24 23:00:44.232100] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:26.716 [2024-07-24 23:00:44.232139] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:26.716 [2024-07-24 23:00:44.232147] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:26.716 [2024-07-24 23:00:44.232153] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:26.716 [2024-07-24 23:00:44.232159] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:26.716 [2024-07-24 23:00:44.232238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:12:26.716 [2024-07-24 23:00:44.232350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:12:26.716 [2024-07-24 23:00:44.232516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:26.716 [2024-07-24 23:00:44.232517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:12:27.288 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:12:27.288 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0
00:12:27.288 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:12:27.288 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable
00:12:27.288 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:27.288 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:27.288 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:12:27.288 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:27.288 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:27.288 [2024-07-24 23:00:44.901623] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:27.288 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:27.288 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
00:12:27.288 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:27.288 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:27.288 [2024-07-24 23:00:44.917827] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:12:27.288 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:27.288 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
00:12:27.288 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:27.288 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:27.288 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:27.288 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
00:12:27.288 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:27.288 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:27.288 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:27.288 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
00:12:27.288 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:27.288 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:27.288 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:27.288 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals
00:12:27.288 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length
00:12:27.288 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:27.288 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:27.288 23:00:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:27.288 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 ))
00:12:27.288 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc
00:12:27.288 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:12:27.288 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:12:27.288 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:12:27.288 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:27.288 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:27.288 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:12:27.288 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:27.288 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals --
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:27.288 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:27.288 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:27.288 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:27.288 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:27.288 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:27.288 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:27.288 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:27.548 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:27.549 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:27.549 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:27.549 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.549 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.549 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.549 23:00:45 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:27.549 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.549 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.549 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.549 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:27.549 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.549 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.549 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.549 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:27.549 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:27.549 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.549 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.549 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.809 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:27.809 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:27.809 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:27.809 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:27.809 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:27.809 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:27.809 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:27.809 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:27.809 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:27.809 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:27.809 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.809 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.809 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.809 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:27.809 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.809 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.809 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.809 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:27.809 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:27.809 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:27.809 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:27.809 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.809 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:27.809 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.809 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.809 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:27.809 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:27.809 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:27.809 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:27.809 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:27.809 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:27.809 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:27.809 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:28.069 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:28.069 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:28.069 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:28.069 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:28.069 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:28.069 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:28.069 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:28.069 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:28.069 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:28.069 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:28.069 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:28.330 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:28.330 23:00:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:28.330 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:28.330 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:28.330 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.330 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.330 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.330 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:28.330 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:28.330 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:28.330 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:28.330 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.330 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:28.330 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.330 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.330 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:28.330 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:28.330 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:28.330 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:28.330 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:28.330 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:28.330 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:28.330 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:28.591 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:28.591 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:28.591 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:28.591 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:28.591 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:28.591 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:28.591 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:28.591 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:28.591 23:00:46 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:28.591 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:28.591 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:28.591 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:28.591 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # 
set +e 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:28.852 rmmod nvme_tcp 00:12:28.852 rmmod nvme_fabrics 00:12:28.852 rmmod nvme_keyring 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 761079 ']' 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 761079 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 761079 ']' 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 761079 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:28.852 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 761079 00:12:29.113 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:29.113 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:29.113 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 761079' 00:12:29.113 killing process with pid 761079 00:12:29.113 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- 
# kill 761079 00:12:29.113 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 761079 00:12:29.113 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:29.113 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:29.113 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:29.113 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:29.113 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:29.113 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.113 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.113 23:00:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.671 23:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:31.671 00:12:31.671 real 0m13.004s 00:12:31.671 user 0m13.386s 00:12:31.671 sys 0m6.489s 00:12:31.671 23:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:31.671 23:00:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:31.671 ************************************ 00:12:31.671 END TEST nvmf_referrals 00:12:31.671 ************************************ 00:12:31.671 23:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:31.671 23:00:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:31.671 
23:00:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:31.671 23:00:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:31.671 ************************************ 00:12:31.671 START TEST nvmf_connect_disconnect 00:12:31.671 ************************************ 00:12:31.671 23:00:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:31.671 * Looking for test storage... 00:12:31.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:31.671 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:31.671 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:31.671 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.671 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.671 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.671 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.671 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.671 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.671 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.671 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.671 23:00:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.671 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.671 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:31.671 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:31.671 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.671 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.671 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:31.671 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.672 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:31.672 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.672 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.672 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.672 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.672 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.672 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.672 23:00:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:31.672 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.672 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:12:31.672 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:31.672 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:31.672 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.672 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.672 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.672 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:31.672 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:31.672 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:31.672 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:12:31.672 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:31.672 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:31.672 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:31.672 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.672 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:31.672 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:31.672 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:31.672 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.672 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:31.672 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.672 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:31.672 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:31.672 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:12:31.672 23:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 
-- # pci_devs=() 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:39.814 23:00:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:39.814 23:00:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:39.814 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:39.814 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:39.814 23:00:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:39.814 Found net devices under 0000:31:00.0: cvl_0_0 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:39.814 
23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:39.814 Found net devices under 0000:31:00.1: cvl_0_1 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:39.814 23:00:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:39.814 23:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:39.814 23:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:39.814 23:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:39.814 23:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:39.814 23:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:39.814 23:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:39.814 23:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:39.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:39.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.413 ms 00:12:39.814 00:12:39.814 --- 10.0.0.2 ping statistics --- 00:12:39.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.814 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:12:39.814 23:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:39.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:39.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.412 ms 00:12:39.814 00:12:39.814 --- 10.0.0.1 ping statistics --- 00:12:39.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.814 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:12:39.814 23:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:39.814 23:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:12:39.814 23:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:39.814 23:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:39.814 23:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:39.814 23:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:39.814 23:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:39.814 23:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:39.814 23:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:39.814 23:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 
00:12:39.814 23:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:39.814 23:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:39.814 23:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:39.814 23:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=766344 00:12:39.814 23:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 766344 00:12:39.814 23:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:39.814 23:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 766344 ']' 00:12:39.814 23:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.814 23:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:39.814 23:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.814 23:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:39.814 23:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:39.814 [2024-07-24 23:00:57.408381] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:12:39.814 [2024-07-24 23:00:57.408437] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:39.814 EAL: No free 2048 kB hugepages reported on node 1 00:12:39.814 [2024-07-24 23:00:57.485993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:39.814 [2024-07-24 23:00:57.559148] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:39.814 [2024-07-24 23:00:57.559184] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:39.814 [2024-07-24 23:00:57.559191] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:39.815 [2024-07-24 23:00:57.559197] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:39.815 [2024-07-24 23:00:57.559207] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:39.815 [2024-07-24 23:00:57.559342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.815 [2024-07-24 23:00:57.559462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:39.815 [2024-07-24 23:00:57.559598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.815 [2024-07-24 23:00:57.559599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:40.756 23:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:40.756 23:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:40.756 23:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:40.756 23:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:40.757 23:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:40.757 23:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:40.757 23:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:40.757 23:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.757 23:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:40.757 [2024-07-24 23:00:58.239682] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:40.757 23:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.757 23:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 
64 512 00:12:40.757 23:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.757 23:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:40.757 23:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.757 23:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:40.757 23:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:40.757 23:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.757 23:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:40.757 23:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.757 23:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:40.757 23:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.757 23:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:40.757 23:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.757 23:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.757 23:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.757 23:00:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:40.757 [2024-07-24 23:00:58.298977] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.757 23:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.757 23:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:40.757 23:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:40.757 23:00:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:44.969 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.163 23:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:59.163 23:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:59.163 23:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:59.163 23:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:59.163 23:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:59.163 23:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:59.163 23:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:59.163 23:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:59.163 rmmod nvme_tcp 00:12:59.163 rmmod nvme_fabrics 00:12:59.163 rmmod nvme_keyring 00:12:59.163 23:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:59.163 23:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:59.163 23:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:59.163 23:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 766344 ']' 00:12:59.163 23:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 766344 00:12:59.163 23:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 766344 ']' 00:12:59.163 23:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 766344 00:12:59.163 23:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:12:59.163 23:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:59.163 23:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 766344 00:12:59.163 23:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:59.163 23:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:59.163 23:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 766344' 00:12:59.163 killing process with pid 766344 00:12:59.163 23:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 766344 00:12:59.163 23:01:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 766344 00:12:59.163 23:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:59.163 23:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:59.163 23:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:59.163 23:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:59.163 23:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:59.163 23:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.163 23:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:59.163 23:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:01.077 00:13:01.077 real 0m29.673s 00:13:01.077 user 1m18.039s 00:13:01.077 sys 0m7.135s 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:01.077 ************************************ 00:13:01.077 END TEST nvmf_connect_disconnect 00:13:01.077 ************************************ 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:01.077 ************************************ 00:13:01.077 START TEST nvmf_multitarget 00:13:01.077 ************************************ 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:01.077 * Looking for test storage... 00:13:01.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.077 23:01:18 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:01.077 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:01.077 
23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:01.078 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:01.078 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:01.078 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:01.078 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:01.078 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:01.078 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.078 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:01.078 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.078 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:01.078 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:01.078 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:13:01.078 23:01:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:09.220 23:01:26 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:09.220 23:01:26 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:09.220 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:09.220 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:09.220 23:01:26 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:09.220 Found net devices under 0000:31:00.0: cvl_0_0 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:09.220 Found net devices under 0000:31:00.1: cvl_0_1 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:09.220 23:01:26 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:09.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:09.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.696 ms 00:13:09.220 00:13:09.220 --- 10.0.0.2 ping statistics --- 00:13:09.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.220 rtt min/avg/max/mdev = 0.696/0.696/0.696/0.000 ms 00:13:09.220 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:09.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:09.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.373 ms 00:13:09.221 00:13:09.221 --- 10.0.0.1 ping statistics --- 00:13:09.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.221 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:13:09.221 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:09.221 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:13:09.221 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:09.221 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:09.221 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:09.221 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:09.221 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:09.221 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:09.221 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:09.221 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:09.221 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:09.221 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:09.221 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:09.221 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=774810 00:13:09.221 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # 
waitforlisten 774810 00:13:09.221 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:09.221 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 774810 ']' 00:13:09.221 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.221 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:09.221 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.221 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:09.221 23:01:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:09.481 [2024-07-24 23:01:27.018896] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:13:09.481 [2024-07-24 23:01:27.018962] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.481 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.481 [2024-07-24 23:01:27.097978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:09.481 [2024-07-24 23:01:27.173130] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.481 [2024-07-24 23:01:27.173167] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:09.481 [2024-07-24 23:01:27.173175] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:09.481 [2024-07-24 23:01:27.173181] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:09.481 [2024-07-24 23:01:27.173187] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:09.481 [2024-07-24 23:01:27.173333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.481 [2024-07-24 23:01:27.173448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.481 [2024-07-24 23:01:27.173606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.481 [2024-07-24 23:01:27.173607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:10.052 23:01:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:10.052 23:01:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:13:10.052 23:01:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:10.052 23:01:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:10.052 23:01:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:10.052 23:01:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.052 23:01:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:10.052 23:01:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:10.052 23:01:27 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:10.311 23:01:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:10.311 23:01:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:10.311 "nvmf_tgt_1" 00:13:10.311 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:10.572 "nvmf_tgt_2" 00:13:10.572 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:10.572 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:10.572 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:10.572 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:10.572 true 00:13:10.572 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:10.831 true 00:13:10.831 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:10.831 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:10.831 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:10.831 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:10.831 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:10.831 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:10.831 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:13:10.831 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:10.831 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:13:10.831 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:10.831 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:10.831 rmmod nvme_tcp 00:13:10.831 rmmod nvme_fabrics 00:13:10.831 rmmod nvme_keyring 00:13:10.831 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:10.831 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:13:10.831 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:13:10.831 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 774810 ']' 00:13:10.831 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 774810 00:13:10.831 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 774810 ']' 00:13:10.831 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 774810 00:13:10.831 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:13:10.831 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:10.831 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 774810 00:13:11.091 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:11.091 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:11.091 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 774810' 00:13:11.091 killing process with pid 774810 00:13:11.091 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 774810 00:13:11.091 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 774810 00:13:11.091 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:11.091 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:11.091 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:11.091 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:11.091 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:11.091 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.091 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:11.091 23:01:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.633 23:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:13.633 00:13:13.633 real 0m12.138s 
00:13:13.633 user 0m9.423s 00:13:13.633 sys 0m6.424s 00:13:13.633 23:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:13.633 23:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:13.633 ************************************ 00:13:13.633 END TEST nvmf_multitarget 00:13:13.633 ************************************ 00:13:13.633 23:01:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:13.633 23:01:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:13.633 23:01:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:13.633 23:01:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:13.633 ************************************ 00:13:13.633 START TEST nvmf_rpc 00:13:13.633 ************************************ 00:13:13.633 23:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:13.633 * Looking for test storage... 
00:13:13.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:13.633 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:13.633 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:13.633 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.633 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.633 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.633 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.633 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.633 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.633 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.633 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.633 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.633 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.633 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:13.633 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:13.633 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.633 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.633 
23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:13.633 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.633 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:13.633 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.633 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.633 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.634 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.634 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.634 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.634 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:13.634 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.634 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:13:13.634 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:13.634 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:13.634 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:13.634 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.634 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.634 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:13.634 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:13.634 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:13.634 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:13.634 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:13.634 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:13.634 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.634 23:01:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:13.634 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:13.634 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:13.634 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.634 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:13.634 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.634 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:13.634 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:13.634 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:13:13.634 23:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:21.773 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:21.773 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.773 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:21.774 Found net devices under 0000:31:00.0: cvl_0_0 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:21.774 Found net devices under 0000:31:00.1: cvl_0_1 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:21.774 23:01:39 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:21.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:21.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:13:21.774 00:13:21.774 --- 10.0.0.2 ping statistics --- 00:13:21.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.774 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:21.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:21.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:13:21.774 00:13:21.774 --- 10.0.0.1 ping statistics --- 00:13:21.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.774 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=779852 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 779852 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 779852 ']' 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:21.774 23:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.774 [2024-07-24 23:01:39.485504] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:13:21.774 [2024-07-24 23:01:39.485567] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:21.774 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.035 [2024-07-24 23:01:39.566948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:22.035 [2024-07-24 23:01:39.641431] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:22.035 [2024-07-24 23:01:39.641472] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:22.035 [2024-07-24 23:01:39.641480] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:22.035 [2024-07-24 23:01:39.641486] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:22.035 [2024-07-24 23:01:39.641492] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:22.035 [2024-07-24 23:01:39.641641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.035 [2024-07-24 23:01:39.641763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.035 [2024-07-24 23:01:39.641900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:22.035 [2024-07-24 23:01:39.642073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.606 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:22.606 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:13:22.606 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:22.606 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:22.606 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.606 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:22.606 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:22.606 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.606 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.606 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.606 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:22.606 "tick_rate": 2400000000, 00:13:22.606 "poll_groups": [ 00:13:22.606 { 00:13:22.606 "name": "nvmf_tgt_poll_group_000", 00:13:22.606 "admin_qpairs": 0, 00:13:22.606 "io_qpairs": 0, 00:13:22.606 "current_admin_qpairs": 0, 00:13:22.606 "current_io_qpairs": 0, 00:13:22.606 "pending_bdev_io": 0, 00:13:22.606 "completed_nvme_io": 0, 
00:13:22.606 "transports": [] 00:13:22.606 }, 00:13:22.606 { 00:13:22.606 "name": "nvmf_tgt_poll_group_001", 00:13:22.606 "admin_qpairs": 0, 00:13:22.606 "io_qpairs": 0, 00:13:22.606 "current_admin_qpairs": 0, 00:13:22.606 "current_io_qpairs": 0, 00:13:22.606 "pending_bdev_io": 0, 00:13:22.606 "completed_nvme_io": 0, 00:13:22.606 "transports": [] 00:13:22.606 }, 00:13:22.606 { 00:13:22.606 "name": "nvmf_tgt_poll_group_002", 00:13:22.606 "admin_qpairs": 0, 00:13:22.606 "io_qpairs": 0, 00:13:22.606 "current_admin_qpairs": 0, 00:13:22.606 "current_io_qpairs": 0, 00:13:22.606 "pending_bdev_io": 0, 00:13:22.606 "completed_nvme_io": 0, 00:13:22.606 "transports": [] 00:13:22.606 }, 00:13:22.606 { 00:13:22.606 "name": "nvmf_tgt_poll_group_003", 00:13:22.606 "admin_qpairs": 0, 00:13:22.606 "io_qpairs": 0, 00:13:22.606 "current_admin_qpairs": 0, 00:13:22.606 "current_io_qpairs": 0, 00:13:22.606 "pending_bdev_io": 0, 00:13:22.606 "completed_nvme_io": 0, 00:13:22.606 "transports": [] 00:13:22.606 } 00:13:22.606 ] 00:13:22.606 }' 00:13:22.606 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:22.606 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:22.606 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:22.606 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:22.606 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:22.606 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:22.867 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:22.867 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:22.867 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 
-- # xtrace_disable 00:13:22.867 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.867 [2024-07-24 23:01:40.433062] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:22.867 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.867 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:22.867 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.867 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.867 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.867 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:22.867 "tick_rate": 2400000000, 00:13:22.867 "poll_groups": [ 00:13:22.867 { 00:13:22.867 "name": "nvmf_tgt_poll_group_000", 00:13:22.867 "admin_qpairs": 0, 00:13:22.867 "io_qpairs": 0, 00:13:22.867 "current_admin_qpairs": 0, 00:13:22.867 "current_io_qpairs": 0, 00:13:22.867 "pending_bdev_io": 0, 00:13:22.867 "completed_nvme_io": 0, 00:13:22.867 "transports": [ 00:13:22.867 { 00:13:22.867 "trtype": "TCP" 00:13:22.867 } 00:13:22.867 ] 00:13:22.867 }, 00:13:22.867 { 00:13:22.867 "name": "nvmf_tgt_poll_group_001", 00:13:22.867 "admin_qpairs": 0, 00:13:22.867 "io_qpairs": 0, 00:13:22.867 "current_admin_qpairs": 0, 00:13:22.867 "current_io_qpairs": 0, 00:13:22.867 "pending_bdev_io": 0, 00:13:22.867 "completed_nvme_io": 0, 00:13:22.867 "transports": [ 00:13:22.867 { 00:13:22.867 "trtype": "TCP" 00:13:22.867 } 00:13:22.867 ] 00:13:22.867 }, 00:13:22.867 { 00:13:22.867 "name": "nvmf_tgt_poll_group_002", 00:13:22.867 "admin_qpairs": 0, 00:13:22.867 "io_qpairs": 0, 00:13:22.867 "current_admin_qpairs": 0, 00:13:22.867 "current_io_qpairs": 0, 00:13:22.867 "pending_bdev_io": 0, 00:13:22.867 "completed_nvme_io": 0, 00:13:22.867 
"transports": [ 00:13:22.867 { 00:13:22.867 "trtype": "TCP" 00:13:22.867 } 00:13:22.867 ] 00:13:22.867 }, 00:13:22.867 { 00:13:22.867 "name": "nvmf_tgt_poll_group_003", 00:13:22.867 "admin_qpairs": 0, 00:13:22.867 "io_qpairs": 0, 00:13:22.868 "current_admin_qpairs": 0, 00:13:22.868 "current_io_qpairs": 0, 00:13:22.868 "pending_bdev_io": 0, 00:13:22.868 "completed_nvme_io": 0, 00:13:22.868 "transports": [ 00:13:22.868 { 00:13:22.868 "trtype": "TCP" 00:13:22.868 } 00:13:22.868 ] 00:13:22.868 } 00:13:22.868 ] 00:13:22.868 }' 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:22.868 23:01:40 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.868 Malloc1 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.868 [2024-07-24 23:01:40.620695] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.2 -s 4420 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.2 -s 4420 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.2 -s 4420 00:13:22.868 [2024-07-24 23:01:40.647478] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb' 00:13:22.868 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:22.868 could not add new controller: failed to write to nvme-fabrics device 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.868 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.129 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:23.129 23:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:24.512 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:24.512 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:24.512 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:24.512 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:24.512 23:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:26.421 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:26.421 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:26.421 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:26.421 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:26.421 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:26.421 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:26.421 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:26.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.421 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:26.421 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- 
# local i=0 00:13:26.421 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:26.421 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:26.421 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:26.421 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:26.681 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:26.682 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:26.682 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.682 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.682 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.682 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:26.682 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:26.682 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:26.682 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:26.682 23:01:44 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:26.682 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:13:26.682 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:26.682 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:13:26.682 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:26.682 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:26.682 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:26.682 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:26.682 [2024-07-24 23:01:44.253089] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb' 00:13:26.682 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:26.682 could not add new controller: failed to write to nvme-fabrics device 00:13:26.682 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:26.682 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:26.682 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:26.682 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:26.682 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd 
nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:26.682 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.682 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.682 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.682 23:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:28.063 23:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:28.063 23:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:28.063 23:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:28.063 23:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:28.063 23:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:29.997 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:29.997 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:29.997 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:29.997 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:29.997 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:29.997 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:29.997 23:01:47 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:30.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.257 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:30.257 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:30.257 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:30.257 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:30.257 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:30.257 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:30.257 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:30.257 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:30.257 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.257 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.257 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.257 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:30.257 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:30.257 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:30.257 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.257 23:01:47 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.257 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.257 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:30.257 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.257 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.257 [2024-07-24 23:01:47.906943] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.257 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.257 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:30.257 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.257 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.257 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.257 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:30.257 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.258 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.258 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.258 23:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n 
nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:32.167 23:01:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:32.167 23:01:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:32.167 23:01:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:32.167 23:01:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:32.167 23:01:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:34.077 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:34.077 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:34.077 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:34.077 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:34.077 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:34.077 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:34.077 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:34.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.077 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:34.077 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:34.077 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:34.077 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # 
grep -q -w SPDKISFASTANDAWESOME 00:13:34.077 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:34.077 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:34.077 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:34.077 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:34.077 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.077 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.077 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.077 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:34.078 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.078 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.078 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.078 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:34.078 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:34.078 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.078 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.078 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.078 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:34.078 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.078 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.078 [2024-07-24 23:01:51.621881] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:34.078 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.078 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:34.078 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.078 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.078 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.078 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:34.078 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.078 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.078 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.078 23:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:35.459 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:35.459 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 
00:13:35.460 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:35.460 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:35.460 23:01:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:37.377 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:37.377 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:37.378 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:37.378 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:37.378 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:37.378 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:37.378 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:37.638 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.639 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:37.639 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:37.639 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:37.639 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:37.639 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:37.639 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:13:37.639 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:37.639 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:37.639 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.639 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.639 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.639 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:37.639 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.639 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.639 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.639 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:37.639 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:37.639 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.639 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.639 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.639 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:37.639 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.639 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:37.639 [2024-07-24 23:01:55.250921] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:37.639 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.639 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:37.639 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.639 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.639 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.639 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:37.639 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.639 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.639 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.639 23:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:39.024 23:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:39.024 23:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:39.024 23:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:39.024 23:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:39.024 
23:01:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:41.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.568 [2024-07-24 23:01:58.925802] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.568 23:01:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:42.952 23:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:42.952 23:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:42.952 23:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:42.952 23:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:42.952 23:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:44.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.866 [2024-07-24 23:02:02.603374] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.866 23:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:46.777 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:46.777 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:46.777 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:46.777 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:46.777 23:02:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:48.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.690 23:02:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.690 [2024-07-24 23:02:06.324725] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.690 
23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.690 
23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.690 [2024-07-24 23:02:06.372800] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.690 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:48.691 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.691 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.691 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.691 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.691 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.691 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:48.691 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:48.691 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.691 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.691 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.691 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.691 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.691 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.691 [2024-07-24 23:02:06.432991] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.691 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.691 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 
00:13:48.691 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.691 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.691 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.691 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:48.691 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.691 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.691 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.691 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.691 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.691 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.691 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.691 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.691 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.691 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.691 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.691 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:48.691 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 
00:13:48.691 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.691 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.952 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.952 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.952 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.952 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.952 [2024-07-24 23:02:06.493193] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.952 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.952 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:48.952 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.952 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.952 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.952 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:48.952 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.952 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.952 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.952 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.952 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.953 [2024-07-24 23:02:06.549358] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.953 23:02:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.953 23:02:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:48.953 "tick_rate": 2400000000, 00:13:48.953 "poll_groups": [ 00:13:48.953 { 00:13:48.953 "name": "nvmf_tgt_poll_group_000", 00:13:48.953 "admin_qpairs": 0, 00:13:48.953 "io_qpairs": 224, 00:13:48.953 "current_admin_qpairs": 0, 00:13:48.953 "current_io_qpairs": 0, 00:13:48.953 "pending_bdev_io": 0, 00:13:48.953 "completed_nvme_io": 224, 00:13:48.953 "transports": [ 00:13:48.953 { 00:13:48.953 "trtype": "TCP" 00:13:48.953 } 00:13:48.953 ] 00:13:48.953 }, 00:13:48.953 { 00:13:48.953 "name": "nvmf_tgt_poll_group_001", 00:13:48.953 "admin_qpairs": 1, 00:13:48.953 "io_qpairs": 223, 00:13:48.953 "current_admin_qpairs": 0, 00:13:48.953 "current_io_qpairs": 0, 00:13:48.953 "pending_bdev_io": 0, 00:13:48.953 "completed_nvme_io": 326, 00:13:48.953 "transports": [ 00:13:48.953 { 00:13:48.953 "trtype": "TCP" 00:13:48.953 } 00:13:48.953 ] 00:13:48.953 }, 00:13:48.953 { 00:13:48.953 "name": "nvmf_tgt_poll_group_002", 00:13:48.953 "admin_qpairs": 6, 00:13:48.953 "io_qpairs": 218, 00:13:48.953 "current_admin_qpairs": 0, 00:13:48.953 "current_io_qpairs": 0, 00:13:48.953 "pending_bdev_io": 0, 00:13:48.953 "completed_nvme_io": 268, 00:13:48.953 "transports": [ 00:13:48.953 { 00:13:48.953 "trtype": "TCP" 00:13:48.953 } 00:13:48.953 ] 00:13:48.953 }, 00:13:48.953 { 00:13:48.953 "name": "nvmf_tgt_poll_group_003", 00:13:48.953 "admin_qpairs": 0, 00:13:48.953 "io_qpairs": 224, 00:13:48.953 "current_admin_qpairs": 0, 00:13:48.953 "current_io_qpairs": 0, 00:13:48.953 "pending_bdev_io": 0, 
00:13:48.953 "completed_nvme_io": 421, 00:13:48.953 "transports": [ 00:13:48.953 { 00:13:48.953 "trtype": "TCP" 00:13:48.953 } 00:13:48.953 ] 00:13:48.953 } 00:13:48.953 ] 00:13:48.953 }' 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- 
# set +e 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:48.953 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:48.953 rmmod nvme_tcp 00:13:48.953 rmmod nvme_fabrics 00:13:49.214 rmmod nvme_keyring 00:13:49.214 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:49.214 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:49.214 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:49.214 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 779852 ']' 00:13:49.214 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 779852 00:13:49.214 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 779852 ']' 00:13:49.214 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 779852 00:13:49.214 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:13:49.214 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:49.214 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 779852 00:13:49.214 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:49.214 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:49.214 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 779852' 00:13:49.214 killing process with pid 779852 00:13:49.214 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 779852 00:13:49.214 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@974 -- # wait 779852 00:13:49.214 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:49.214 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:49.214 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:49.214 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:49.214 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:49.214 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.214 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:49.214 23:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.763 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:51.763 00:13:51.763 real 0m38.108s 00:13:51.763 user 1m51.592s 00:13:51.763 sys 0m7.765s 00:13:51.763 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:51.763 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.763 ************************************ 00:13:51.763 END TEST nvmf_rpc 00:13:51.763 ************************************ 00:13:51.763 23:02:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # 
set +x 00:13:51.764 ************************************ 00:13:51.764 START TEST nvmf_invalid 00:13:51.764 ************************************ 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:51.764 * Looking for test storage... 00:13:51.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:51.764 23:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:59.912 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:59.912 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:59.912 23:02:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:59.912 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:59.912 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:59.912 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:59.912 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:59.912 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:59.912 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:59.912 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:59.912 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:59.912 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:59.912 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:59.913 
23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:59.913 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:59.913 23:02:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:59.913 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:59.913 
23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:59.913 Found net devices under 0000:31:00.0: cvl_0_0 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:59.913 Found net devices under 0000:31:00.1: cvl_0_1 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 
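The discovery loop traced above (nvmf/common.sh@382-@401) maps each supported PCI address to its kernel net device by globbing `/sys/bus/pci/devices/$pci/net/*` and then stripping the directory prefix with `##*/`. A minimal standalone sketch of that idiom, using a temporary directory that mimics the sysfs layout so it runs without real NICs or root (the helper name `pci_to_net_devs` is illustrative, not part of common.sh):

```shell
#!/usr/bin/env bash
# Sketch of the sysfs glob-then-strip idiom from nvmf/common.sh.
# A temp dir stands in for /sys so no hardware is required.

pci_to_net_devs() {
  local root=$1 pci=$2
  # Glob the net/ subdirectory of the PCI device node...
  local devs=("$root/bus/pci/devices/$pci/net/"*)
  # ...then keep only the basename of each entry (##*/ strips the path).
  devs=("${devs[@]##*/}")
  echo "${devs[@]}"
}

sysfs=$(mktemp -d)
mkdir -p "$sysfs/bus/pci/devices/0000:31:00.0/net/cvl_0_0"

pci_to_net_devs "$sysfs" "0000:31:00.0"   # prints: cvl_0_0
```

The same pattern appears twice in the trace, once per port of the dual-port E810 device, which is why two `Found net devices under 0000:31:00.x` lines are logged.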
00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:59.913 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:59.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:59.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.734 ms 00:13:59.913 00:13:59.914 --- 10.0.0.2 ping statistics --- 00:13:59.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.914 rtt min/avg/max/mdev = 0.734/0.734/0.734/0.000 ms 00:13:59.914 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:59.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:59.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.434 ms 00:13:59.914 00:13:59.914 --- 10.0.0.1 ping statistics --- 00:13:59.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.914 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:13:59.914 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:59.914 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:59.914 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:59.914 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:59.914 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:59.914 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:59.914 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:59.914 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:59.914 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:59.914 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:59.914 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:59.914 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:59.914 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:59.914 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=790065 00:13:59.914 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 790065 00:13:59.914 23:02:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:59.914 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 790065 ']' 00:13:59.914 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.914 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:59.914 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.914 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:59.914 23:02:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:59.914 [2024-07-24 23:02:17.474721] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:13:59.914 [2024-07-24 23:02:17.474807] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:59.914 EAL: No free 2048 kB hugepages reported on node 1 00:13:59.914 [2024-07-24 23:02:17.554386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:59.914 [2024-07-24 23:02:17.628732] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:59.914 [2024-07-24 23:02:17.628775] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
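The `waitforlisten 790065` step above blocks until the freshly launched nvmf_tgt has created its JSON-RPC socket at /var/tmp/spdk.sock. A hedged sketch of that polling idiom, with a plain file standing in for the UNIX socket so it runs anywhere (the function name, retry count, and sleep interval are illustrative assumptions, not the exact autotest_common.sh implementation):

```shell
#!/usr/bin/env bash
# Illustrative waitforlisten-style poll: retry until a path appears,
# then give up after max_retries attempts.

wait_for_socket() {
  local path=$1 max_retries=${2:-100} i
  for ((i = 0; i < max_retries; i++)); do
    # Accept either a real socket (-S) or, for this sketch, any file.
    [[ -S $path || -e $path ]] && return 0
    sleep 0.1
  done
  echo "timed out waiting for $path" >&2
  return 1
}

# Usage: create the stand-in "socket" shortly after we start waiting.
sock=$(mktemp -u)
( sleep 0.3; : > "$sock" ) &
wait_for_socket "$sock" 50 && echo "listening on $sock"
```

In the real run the target is additionally wrapped in `ip netns exec cvl_0_0_ns_spdk`, so the RPC socket is the only part of it visible from the default namespace.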
00:13:59.914 [2024-07-24 23:02:17.628783] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:59.914 [2024-07-24 23:02:17.628789] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:59.914 [2024-07-24 23:02:17.628795] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:59.914 [2024-07-24 23:02:17.628879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.914 [2024-07-24 23:02:17.629013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:59.914 [2024-07-24 23:02:17.629171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.914 [2024-07-24 23:02:17.629172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:00.528 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:00.528 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:14:00.528 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:00.528 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:00.528 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:00.528 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:00.528 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:00.528 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode13039 00:14:00.789 [2024-07-24 23:02:18.443052] 
nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:00.789 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:00.789 { 00:14:00.789 "nqn": "nqn.2016-06.io.spdk:cnode13039", 00:14:00.789 "tgt_name": "foobar", 00:14:00.789 "method": "nvmf_create_subsystem", 00:14:00.789 "req_id": 1 00:14:00.789 } 00:14:00.789 Got JSON-RPC error response 00:14:00.789 response: 00:14:00.789 { 00:14:00.789 "code": -32603, 00:14:00.789 "message": "Unable to find target foobar" 00:14:00.789 }' 00:14:00.789 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:00.789 { 00:14:00.789 "nqn": "nqn.2016-06.io.spdk:cnode13039", 00:14:00.789 "tgt_name": "foobar", 00:14:00.789 "method": "nvmf_create_subsystem", 00:14:00.789 "req_id": 1 00:14:00.789 } 00:14:00.789 Got JSON-RPC error response 00:14:00.789 response: 00:14:00.789 { 00:14:00.789 "code": -32603, 00:14:00.789 "message": "Unable to find target foobar" 00:14:00.789 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:00.789 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:00.789 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode13825 00:14:01.050 [2024-07-24 23:02:18.619615] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13825: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:01.050 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:01.050 { 00:14:01.050 "nqn": "nqn.2016-06.io.spdk:cnode13825", 00:14:01.050 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:01.050 "method": "nvmf_create_subsystem", 00:14:01.050 "req_id": 1 00:14:01.050 } 00:14:01.050 Got JSON-RPC error response 00:14:01.050 response: 
00:14:01.050 { 00:14:01.050 "code": -32602, 00:14:01.050 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:01.050 }' 00:14:01.050 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:01.050 { 00:14:01.050 "nqn": "nqn.2016-06.io.spdk:cnode13825", 00:14:01.050 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:01.050 "method": "nvmf_create_subsystem", 00:14:01.050 "req_id": 1 00:14:01.050 } 00:14:01.050 Got JSON-RPC error response 00:14:01.050 response: 00:14:01.050 { 00:14:01.050 "code": -32602, 00:14:01.050 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:01.050 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:01.050 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:01.050 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode29328 00:14:01.050 [2024-07-24 23:02:18.796197] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29328: invalid model number 'SPDK_Controller' 00:14:01.050 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:01.050 { 00:14:01.050 "nqn": "nqn.2016-06.io.spdk:cnode29328", 00:14:01.050 "model_number": "SPDK_Controller\u001f", 00:14:01.050 "method": "nvmf_create_subsystem", 00:14:01.050 "req_id": 1 00:14:01.050 } 00:14:01.050 Got JSON-RPC error response 00:14:01.050 response: 00:14:01.050 { 00:14:01.050 "code": -32602, 00:14:01.050 "message": "Invalid MN SPDK_Controller\u001f" 00:14:01.050 }' 00:14:01.050 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:01.050 { 00:14:01.050 "nqn": "nqn.2016-06.io.spdk:cnode29328", 00:14:01.050 "model_number": "SPDK_Controller\u001f", 00:14:01.050 "method": "nvmf_create_subsystem", 00:14:01.050 "req_id": 1 00:14:01.050 } 
00:14:01.050 Got JSON-RPC error response 00:14:01.050 response: 00:14:01.050 { 00:14:01.050 "code": -32602, 00:14:01.050 "message": "Invalid MN SPDK_Controller\u001f" 00:14:01.050 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:01.050 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:01.050 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:01.050 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:01.050 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:01.050 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:01.050 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:01.050 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.313 23:02:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 
00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:14:01.313 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:14:01.314 
23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.314 23:02:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.314 23:02:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 0 == \- ]] 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '0HM.}OI7-nE~E~)~'\''7h@3' 00:14:01.314 23:02:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '0HM.}OI7-nE~E~)~'\''7h@3' nqn.2016-06.io.spdk:cnode15335 00:14:01.576 [2024-07-24 23:02:19.129229] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15335: invalid serial number '0HM.}OI7-nE~E~)~'7h@3' 00:14:01.576 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:01.576 { 00:14:01.576 "nqn": "nqn.2016-06.io.spdk:cnode15335", 00:14:01.576 "serial_number": 
"0HM.}OI7-nE~E~)~'\''7h@3", 00:14:01.576 "method": "nvmf_create_subsystem", 00:14:01.576 "req_id": 1 00:14:01.576 } 00:14:01.576 Got JSON-RPC error response 00:14:01.576 response: 00:14:01.576 { 00:14:01.576 "code": -32602, 00:14:01.576 "message": "Invalid SN 0HM.}OI7-nE~E~)~'\''7h@3" 00:14:01.576 }' 00:14:01.576 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:01.576 { 00:14:01.576 "nqn": "nqn.2016-06.io.spdk:cnode15335", 00:14:01.576 "serial_number": "0HM.}OI7-nE~E~)~'7h@3", 00:14:01.576 "method": "nvmf_create_subsystem", 00:14:01.576 "req_id": 1 00:14:01.576 } 00:14:01.576 Got JSON-RPC error response 00:14:01.576 response: 00:14:01.576 { 00:14:01.576 "code": -32602, 00:14:01.576 "message": "Invalid SN 0HM.}OI7-nE~E~)~'7h@3" 00:14:01.576 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:01.577 23:02:19 
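The trace above shows `gen_random_s` from `target/invalid.sh` assembling a random serial number one character at a time: pick an entry from the `chars` array (decimal codes 32-127), convert it with `printf %x`, render it with `echo -e '\xNN'`, and append it to `string`. A condensed sketch of the same technique follows; this is a standalone reconstruction from the trace, not the script's exact source, and the substitution applied when the string starts with `-` (the `[[ 0 == \- ]]` guard visible in the log) is a hypothetical choice since the log only shows the check.

```shell
# Sketch of the per-character loop traced above (reconstructed, not verbatim).
gen_random_s() {
	local length=$1 ll string=
	# Decimal codes 32..127, matching the chars array printed in the trace
	local chars=($(seq 32 127))
	for ((ll = 0; ll < length; ll++)); do
		# printf %x converts the decimal code to hex; echo -e renders the byte
		string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
	done
	# The log checks whether the first character is '-' (rpc.py would parse it
	# as an option); replacing it with a space here is a hypothetical fix-up.
	[[ ${string:0:1} == - ]] && string=" ${string:1}"
	echo "$string"
}
```

Command substitution strips only trailing newlines, so a trailing space from code 32 survives the append; the leading-dash guard also keeps `echo "$string"` from ever seeing `-n`/`-e` as an option.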
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:01.577 23:02:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:01.577 
23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:14:01.577 23:02:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.577 23:02:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:01.577 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:14:01.578 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.578 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.578 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:14:01.578 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:01.578 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:14:01.578 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.578 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.578 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:14:01.578 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:14:01.578 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:14:01.578 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.578 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.578 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:14:01.578 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 
00:14:01.578 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:14:01.578 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.578 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.578 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:01.578 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:01.578 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:01.578 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.578 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.839 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:14:01.839 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:14:01.839 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:14:01.839 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.839 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.839 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:14:01.839 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:14:01.839 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:14:01.839 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.839 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.839 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:14:01.839 
23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:01.839 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:14:01.839 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.839 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.839 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:14:01.839 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:14:01.839 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:14:01.839 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.839 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.839 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:14:01.839 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:01.839 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:14:01.839 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.839 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.839 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:14:01.839 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:14:01.839 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:14:01.839 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.839 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.839 23:02:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:14:01.839 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:14:01.839 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:14:01.839 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.839 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.840 23:02:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:14:01.840 23:02:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ h == \- ]] 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'h8_b$?=,U<$ zu&+cg(V?@I$hbkSIaherqdho7'\''Hw' 00:14:01.840 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'h8_b$?=,U<$ zu&+cg(V?@I$hbkSIaherqdho7'\''Hw' nqn.2016-06.io.spdk:cnode15586 00:14:01.840 [2024-07-24 23:02:19.610755] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15586: invalid model number 'h8_b$?=,U<$ zu&+cg(V?@I$hbkSIaherqdho7'Hw' 00:14:02.101 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:14:02.101 { 00:14:02.101 "nqn": "nqn.2016-06.io.spdk:cnode15586", 00:14:02.101 "model_number": "h8_b$?=,U<$ zu&+cg(V?@I$hbkSIaherqdho7'\''Hw", 00:14:02.101 "method": "nvmf_create_subsystem", 00:14:02.101 "req_id": 1 00:14:02.101 } 00:14:02.101 Got JSON-RPC error response 00:14:02.101 response: 00:14:02.101 { 00:14:02.101 "code": -32602, 00:14:02.101 "message": "Invalid MN h8_b$?=,U<$ zu&+cg(V?@I$hbkSIaherqdho7'\''Hw" 00:14:02.101 }' 00:14:02.101 23:02:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:14:02.101 { 00:14:02.101 "nqn": "nqn.2016-06.io.spdk:cnode15586", 00:14:02.101 "model_number": "h8_b$?=,U<$ zu&+cg(V?@I$hbkSIaherqdho7'Hw", 00:14:02.101 "method": "nvmf_create_subsystem", 00:14:02.101 "req_id": 1 00:14:02.101 } 00:14:02.101 Got JSON-RPC error response 00:14:02.101 response: 00:14:02.101 { 00:14:02.101 "code": -32602, 00:14:02.101 "message": "Invalid MN h8_b$?=,U<$ zu&+cg(V?@I$hbkSIaherqdho7'Hw" 00:14:02.101 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:02.101 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:02.101 [2024-07-24 23:02:19.783398] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:02.101 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:02.363 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:14:02.363 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:14:02.363 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:14:02.363 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:14:02.363 23:02:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:02.363 [2024-07-24 23:02:20.124511] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:02.624 23:02:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:14:02.624 { 00:14:02.624 "nqn": "nqn.2016-06.io.spdk:cnode", 
00:14:02.624 "listen_address": { 00:14:02.624 "trtype": "tcp", 00:14:02.624 "traddr": "", 00:14:02.624 "trsvcid": "4421" 00:14:02.624 }, 00:14:02.624 "method": "nvmf_subsystem_remove_listener", 00:14:02.624 "req_id": 1 00:14:02.624 } 00:14:02.624 Got JSON-RPC error response 00:14:02.624 response: 00:14:02.624 { 00:14:02.624 "code": -32602, 00:14:02.624 "message": "Invalid parameters" 00:14:02.624 }' 00:14:02.624 23:02:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:14:02.624 { 00:14:02.624 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:02.624 "listen_address": { 00:14:02.624 "trtype": "tcp", 00:14:02.624 "traddr": "", 00:14:02.624 "trsvcid": "4421" 00:14:02.624 }, 00:14:02.624 "method": "nvmf_subsystem_remove_listener", 00:14:02.624 "req_id": 1 00:14:02.624 } 00:14:02.624 Got JSON-RPC error response 00:14:02.624 response: 00:14:02.624 { 00:14:02.624 "code": -32602, 00:14:02.624 "message": "Invalid parameters" 00:14:02.624 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:02.624 23:02:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17228 -i 0 00:14:02.624 [2024-07-24 23:02:20.301031] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17228: invalid cntlid range [0-65519] 00:14:02.624 23:02:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:14:02.624 { 00:14:02.624 "nqn": "nqn.2016-06.io.spdk:cnode17228", 00:14:02.624 "min_cntlid": 0, 00:14:02.624 "method": "nvmf_create_subsystem", 00:14:02.624 "req_id": 1 00:14:02.624 } 00:14:02.624 Got JSON-RPC error response 00:14:02.624 response: 00:14:02.624 { 00:14:02.624 "code": -32602, 00:14:02.624 "message": "Invalid cntlid range [0-65519]" 00:14:02.624 }' 00:14:02.624 23:02:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:14:02.624 
{ 00:14:02.624 "nqn": "nqn.2016-06.io.spdk:cnode17228", 00:14:02.624 "min_cntlid": 0, 00:14:02.624 "method": "nvmf_create_subsystem", 00:14:02.624 "req_id": 1 00:14:02.624 } 00:14:02.624 Got JSON-RPC error response 00:14:02.624 response: 00:14:02.624 { 00:14:02.624 "code": -32602, 00:14:02.624 "message": "Invalid cntlid range [0-65519]" 00:14:02.624 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:02.624 23:02:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16936 -i 65520 00:14:02.885 [2024-07-24 23:02:20.473572] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16936: invalid cntlid range [65520-65519] 00:14:02.885 23:02:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:14:02.885 { 00:14:02.885 "nqn": "nqn.2016-06.io.spdk:cnode16936", 00:14:02.885 "min_cntlid": 65520, 00:14:02.885 "method": "nvmf_create_subsystem", 00:14:02.885 "req_id": 1 00:14:02.885 } 00:14:02.885 Got JSON-RPC error response 00:14:02.885 response: 00:14:02.885 { 00:14:02.885 "code": -32602, 00:14:02.885 "message": "Invalid cntlid range [65520-65519]" 00:14:02.885 }' 00:14:02.885 23:02:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:14:02.885 { 00:14:02.885 "nqn": "nqn.2016-06.io.spdk:cnode16936", 00:14:02.885 "min_cntlid": 65520, 00:14:02.885 "method": "nvmf_create_subsystem", 00:14:02.885 "req_id": 1 00:14:02.885 } 00:14:02.885 Got JSON-RPC error response 00:14:02.885 response: 00:14:02.885 { 00:14:02.885 "code": -32602, 00:14:02.885 "message": "Invalid cntlid range [65520-65519]" 00:14:02.885 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:02.885 23:02:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode19925 -I 0 00:14:02.885 [2024-07-24 23:02:20.646151] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19925: invalid cntlid range [1-0] 00:14:03.145 23:02:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:14:03.145 { 00:14:03.145 "nqn": "nqn.2016-06.io.spdk:cnode19925", 00:14:03.145 "max_cntlid": 0, 00:14:03.145 "method": "nvmf_create_subsystem", 00:14:03.145 "req_id": 1 00:14:03.145 } 00:14:03.145 Got JSON-RPC error response 00:14:03.145 response: 00:14:03.145 { 00:14:03.145 "code": -32602, 00:14:03.145 "message": "Invalid cntlid range [1-0]" 00:14:03.145 }' 00:14:03.145 23:02:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:14:03.145 { 00:14:03.145 "nqn": "nqn.2016-06.io.spdk:cnode19925", 00:14:03.145 "max_cntlid": 0, 00:14:03.145 "method": "nvmf_create_subsystem", 00:14:03.145 "req_id": 1 00:14:03.145 } 00:14:03.145 Got JSON-RPC error response 00:14:03.145 response: 00:14:03.145 { 00:14:03.145 "code": -32602, 00:14:03.145 "message": "Invalid cntlid range [1-0]" 00:14:03.145 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:03.145 23:02:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3444 -I 65520 00:14:03.145 [2024-07-24 23:02:20.810614] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3444: invalid cntlid range [1-65520] 00:14:03.145 23:02:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:14:03.145 { 00:14:03.145 "nqn": "nqn.2016-06.io.spdk:cnode3444", 00:14:03.145 "max_cntlid": 65520, 00:14:03.145 "method": "nvmf_create_subsystem", 00:14:03.145 "req_id": 1 00:14:03.145 } 00:14:03.145 Got JSON-RPC error response 00:14:03.145 response: 00:14:03.145 { 00:14:03.145 "code": -32602, 00:14:03.145 "message": 
"Invalid cntlid range [1-65520]" 00:14:03.145 }' 00:14:03.145 23:02:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:14:03.145 { 00:14:03.145 "nqn": "nqn.2016-06.io.spdk:cnode3444", 00:14:03.145 "max_cntlid": 65520, 00:14:03.145 "method": "nvmf_create_subsystem", 00:14:03.145 "req_id": 1 00:14:03.145 } 00:14:03.145 Got JSON-RPC error response 00:14:03.145 response: 00:14:03.145 { 00:14:03.145 "code": -32602, 00:14:03.145 "message": "Invalid cntlid range [1-65520]" 00:14:03.145 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:03.145 23:02:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28836 -i 6 -I 5 00:14:03.406 [2024-07-24 23:02:20.975126] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28836: invalid cntlid range [6-5] 00:14:03.406 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:14:03.406 { 00:14:03.406 "nqn": "nqn.2016-06.io.spdk:cnode28836", 00:14:03.406 "min_cntlid": 6, 00:14:03.406 "max_cntlid": 5, 00:14:03.406 "method": "nvmf_create_subsystem", 00:14:03.406 "req_id": 1 00:14:03.406 } 00:14:03.406 Got JSON-RPC error response 00:14:03.406 response: 00:14:03.406 { 00:14:03.406 "code": -32602, 00:14:03.406 "message": "Invalid cntlid range [6-5]" 00:14:03.406 }' 00:14:03.406 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:14:03.406 { 00:14:03.406 "nqn": "nqn.2016-06.io.spdk:cnode28836", 00:14:03.406 "min_cntlid": 6, 00:14:03.406 "max_cntlid": 5, 00:14:03.406 "method": "nvmf_create_subsystem", 00:14:03.406 "req_id": 1 00:14:03.406 } 00:14:03.406 Got JSON-RPC error response 00:14:03.406 response: 00:14:03.406 { 00:14:03.406 "code": -32602, 00:14:03.406 "message": "Invalid cntlid range [6-5]" 00:14:03.406 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 
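The five cntlid ranges rejected above ([0-65519], [65520-65519], [1-0], [1-65520], [6-5]) are consistent with a bound of 1..0xFFEF and min <= max. A minimal illustrative re-implementation of that rule (an inference from the log output, not SPDK's actual code):

```python
# Bounds inferred from the rejections in the log above:
# min_cntlid >= 1, max_cntlid <= 0xFFEF (65519), and min <= max.
NVMF_MIN_CNTLID = 1
NVMF_MAX_CNTLID = 0xFFEF  # 65519

def cntlid_range_valid(min_cntlid: int, max_cntlid: int) -> bool:
    """Return True if [min_cntlid, max_cntlid] is an acceptable range."""
    if min_cntlid < NVMF_MIN_CNTLID or max_cntlid > NVMF_MAX_CNTLID:
        return False
    return min_cntlid <= max_cntlid

# Every range the target rejected in the log is rejected here too:
for lo, hi in [(0, 65519), (65520, 65519), (1, 0), (1, 65520), (6, 5)]:
    assert not cntlid_range_valid(lo, hi), f"[{lo}-{hi}] should be invalid"
assert cntlid_range_valid(1, 65519)  # the full legal range is accepted
```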
00:14:03.406 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:03.406 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:14:03.406 { 00:14:03.406 "name": "foobar", 00:14:03.406 "method": "nvmf_delete_target", 00:14:03.406 "req_id": 1 00:14:03.406 } 00:14:03.406 Got JSON-RPC error response 00:14:03.406 response: 00:14:03.406 { 00:14:03.406 "code": -32602, 00:14:03.406 "message": "The specified target doesn'\''t exist, cannot delete it." 00:14:03.406 }' 00:14:03.406 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:14:03.406 { 00:14:03.406 "name": "foobar", 00:14:03.406 "method": "nvmf_delete_target", 00:14:03.406 "req_id": 1 00:14:03.406 } 00:14:03.406 Got JSON-RPC error response 00:14:03.406 response: 00:14:03.406 { 00:14:03.406 "code": -32602, 00:14:03.406 "message": "The specified target doesn't exist, cannot delete it." 
00:14:03.406 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:03.406 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:03.406 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:14:03.406 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:03.406 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:14:03.406 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:03.406 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:14:03.406 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:03.406 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:03.406 rmmod nvme_tcp 00:14:03.406 rmmod nvme_fabrics 00:14:03.406 rmmod nvme_keyring 00:14:03.406 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:03.406 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:14:03.406 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:14:03.406 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 790065 ']' 00:14:03.406 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 790065 00:14:03.406 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 790065 ']' 00:14:03.406 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 790065 00:14:03.406 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:14:03.406 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:03.406 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 790065 00:14:03.668 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:03.668 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:03.668 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 790065' 00:14:03.668 killing process with pid 790065 00:14:03.668 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 790065 00:14:03.668 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 790065 00:14:03.668 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:03.668 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:03.668 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:03.668 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:03.668 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:03.668 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.668 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:03.668 23:02:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:06.217 00:14:06.217 real 0m14.297s 00:14:06.217 user 0m19.302s 00:14:06.217 sys 0m6.955s 
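The "invalid model number" failure near the top of this test used a 41-character random string; the NVMe Identify Controller MN field is 40 bytes, so length alone fails. A hedged sketch of a plausible check (illustrative only, not SPDK's implementation):

```python
NVME_MODEL_NUMBER_LEN = 40  # size of the MN field in NVMe Identify Controller

def model_number_valid(mn: str) -> bool:
    """Plausible validation: non-empty, fits the 40-byte MN field,
    printable ASCII only. An assumption inferred from the log, not SPDK code."""
    if not 0 < len(mn) <= NVME_MODEL_NUMBER_LEN:
        return False
    return all(0x20 <= ord(c) <= 0x7E for c in mn)

assert not model_number_valid("x" * 41)          # one byte too long, as in the log
assert model_number_valid("SPDK bdev Controller")  # a typical MN passes
```

Note the RPC errors all carry code -32602, the standard JSON-RPC "Invalid params" code, which is what the test's `[[ ... == *Invalid MN* ]]` / `*Invalid cntlid range*` pattern matches are layered on top of.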
00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:06.217 ************************************ 00:14:06.217 END TEST nvmf_invalid 00:14:06.217 ************************************ 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:06.217 ************************************ 00:14:06.217 START TEST nvmf_connect_stress 00:14:06.217 ************************************ 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:06.217 * Looking for test storage... 
00:14:06.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:06.217 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:06.218 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:06.218 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:06.218 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:06.218 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:06.218 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:14:06.218 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:06.218 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:06.218 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:06.218 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.218 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:06.218 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.218 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:06.218 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:06.218 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:06.218 23:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:14.365 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:14.365 23:02:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:14.365 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:14.365 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:14.366 23:02:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:14.366 Found net devices under 0000:31:00.0: cvl_0_0 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:14.366 Found net devices under 0000:31:00.1: cvl_0_1 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:14.366 
23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:14.366 
23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:14.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:14.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:14:14.366 00:14:14.366 --- 10.0.0.2 ping statistics --- 00:14:14.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.366 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:14.366 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:14.366 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:14:14.366 00:14:14.366 --- 10.0.0.1 ping statistics --- 00:14:14.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.366 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=795591 00:14:14.366 23:02:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 795591 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 795591 ']' 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.366 23:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:14.366 [2024-07-24 23:02:31.699915] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:14:14.366 [2024-07-24 23:02:31.699975] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.366 EAL: No free 2048 kB hugepages reported on node 1 00:14:14.366 [2024-07-24 23:02:31.795505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:14.366 [2024-07-24 23:02:31.889872] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:14.366 [2024-07-24 23:02:31.889921] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:14.366 [2024-07-24 23:02:31.889930] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:14.366 [2024-07-24 23:02:31.889937] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:14.366 [2024-07-24 23:02:31.889943] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:14.366 [2024-07-24 23:02:31.890072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:14.366 [2024-07-24 23:02:31.890238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.366 [2024-07-24 23:02:31.890238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:14:14.939 [2024-07-24 23:02:32.528605] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.939 [2024-07-24 23:02:32.566673] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.939 NULL1 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=795931 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:14:14.939 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.940 EAL: No free 2048 kB hugepages reported on node 1 00:14:14.940 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.940 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.940 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.940 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.940 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.940 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.940 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.940 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.940 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.940 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.940 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.940 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.940 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.940 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.940 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.940 23:02:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.940 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.940 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.940 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.940 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.940 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.940 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.940 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.940 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.940 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.940 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.940 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.940 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.940 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.940 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.940 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 795931 00:14:14.940 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.940 23:02:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.940 23:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.512 23:02:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.512 23:02:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 795931 00:14:15.512 23:02:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.512 23:02:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.512 23:02:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.772 23:02:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.772 23:02:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 795931 00:14:15.772 23:02:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.772 23:02:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.772 23:02:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.033 23:02:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.034 23:02:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 795931 00:14:16.034 23:02:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.034 23:02:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.034 23:02:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.294 23:02:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.294 23:02:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 795931 00:14:16.294 23:02:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.294 23:02:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.294 23:02:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.555 23:02:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.555 23:02:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 795931 00:14:16.555 23:02:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.555 23:02:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.555 23:02:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.124 23:02:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.124 23:02:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 795931 00:14:17.124 23:02:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.124 23:02:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.124 23:02:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.385 23:02:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.385 23:02:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 795931 00:14:17.385 
23:02:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.385 23:02:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.385 23:02:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.646 23:02:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.646 23:02:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 795931 00:14:17.646 23:02:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.646 23:02:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.646 23:02:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.907 23:02:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.907 23:02:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 795931 00:14:17.907 23:02:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.907 23:02:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.907 23:02:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.169 23:02:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.169 23:02:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 795931 00:14:18.169 23:02:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.169 23:02:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.169 
23:02:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.740 23:02:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.740 23:02:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 795931 00:14:18.740 23:02:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.740 23:02:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.740 23:02:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.001 23:02:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.001 23:02:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 795931 00:14:19.001 23:02:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.001 23:02:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.001 23:02:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.261 23:02:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.261 23:02:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 795931 00:14:19.261 23:02:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.261 23:02:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.261 23:02:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.522 23:02:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.522 
23:02:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 795931 00:14:19.522 23:02:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.522 23:02:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.522 23:02:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.783 23:02:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.783 23:02:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 795931 00:14:19.783 23:02:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.783 23:02:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.784 23:02:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.356 23:02:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.356 23:02:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 795931 00:14:20.356 23:02:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.356 23:02:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.356 23:02:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.617 23:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.617 23:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 795931 00:14:20.617 23:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.617 
23:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.617 23:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.878 23:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.878 23:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 795931 00:14:20.878 23:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.878 23:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.878 23:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.139 23:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.139 23:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 795931 00:14:21.139 23:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.139 23:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.139 23:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.711 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.711 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 795931 00:14:21.711 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.711 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.711 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.971 
23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.971 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 795931 00:14:21.971 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.971 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.971 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:22.231 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.231 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 795931 00:14:22.231 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.231 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.231 23:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:22.491 23:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.491 23:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 795931 00:14:22.491 23:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.492 23:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.492 23:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:22.752 23:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.752 23:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 795931 
00:14:22.752 23:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.752 23:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.752 23:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:23.324 23:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.324 23:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 795931 00:14:23.324 23:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.324 23:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.324 23:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:23.584 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.585 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 795931 00:14:23.585 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.585 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.585 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:23.845 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.845 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 795931 00:14:23.845 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.845 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:23.845 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:24.105 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.105 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 795931 00:14:24.105 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.105 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.105 23:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:24.365 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.365 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 795931 00:14:24.365 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.365 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.365 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:24.938 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.938 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 795931 00:14:24.938 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.938 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.938 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:25.239 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:25.239 23:02:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.239 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 795931 00:14:25.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (795931) - No such process 00:14:25.240 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 795931 00:14:25.240 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:25.240 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:25.240 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:25.240 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:25.240 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:25.240 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:25.240 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:25.240 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:25.240 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:25.240 rmmod nvme_tcp 00:14:25.240 rmmod nvme_fabrics 00:14:25.240 rmmod nvme_keyring 00:14:25.240 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:25.240 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:25.240 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 
00:14:25.240 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 795591 ']' 00:14:25.240 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 795591 00:14:25.240 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 795591 ']' 00:14:25.240 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 795591 00:14:25.240 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:14:25.240 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:25.240 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 795591 00:14:25.240 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:25.240 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:25.240 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 795591' 00:14:25.240 killing process with pid 795591 00:14:25.240 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 795591 00:14:25.240 23:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 795591 00:14:25.505 23:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:25.505 23:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:25.505 23:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:25.505 23:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:25.505 23:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:25.505 23:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.505 23:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:25.505 23:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.412 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:27.412 00:14:27.412 real 0m21.607s 00:14:27.413 user 0m42.361s 00:14:27.413 sys 0m9.215s 00:14:27.413 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:27.413 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:27.413 ************************************ 00:14:27.413 END TEST nvmf_connect_stress 00:14:27.413 ************************************ 00:14:27.413 23:02:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:27.413 23:02:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:27.413 23:02:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:27.413 23:02:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:27.413 ************************************ 00:14:27.413 START TEST nvmf_fused_ordering 00:14:27.413 ************************************ 00:14:27.413 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:27.673 * Looking for test storage... 00:14:27.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:27.673 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:27.673 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:27.673 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:27.673 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:27.673 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:27.673 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:27.673 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:27.673 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:27.673 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:27.673 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:27.673 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:27.673 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:27.673 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:27.673 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # 
NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:27.673 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:27.673 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:27.673 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:27.673 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:27.673 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:27.673 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:27.673 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:27.673 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:27.673 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.674 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.674 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.674 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:27.674 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.674 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:27.674 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:27.674 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:27.674 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:27.674 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:27.674 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:27.674 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:27.674 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:27.674 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:27.674 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:27.674 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:27.674 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:14:27.674 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:27.674 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:27.674 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:27.674 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.674 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:27.674 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.674 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:27.674 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:27.674 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:27.674 23:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:35.815 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:35.815 23:02:53 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:35.815 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:35.815 23:02:53 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:35.815 Found net devices under 0000:31:00.0: cvl_0_0 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.815 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:35.816 Found net devices under 0000:31:00.1: cvl_0_1 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:35.816 
23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:35.816 
23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:35.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:35.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.729 ms 00:14:35.816 00:14:35.816 --- 10.0.0.2 ping statistics --- 00:14:35.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.816 rtt min/avg/max/mdev = 0.729/0.729/0.729/0.000 ms 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:35.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:35.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.379 ms 00:14:35.816 00:14:35.816 --- 10.0.0.1 ping statistics --- 00:14:35.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.816 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=802584 00:14:35.816 23:02:53 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 802584
00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 802584 ']'
00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100
00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:35.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable
00:14:35.816 23:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:35.816 [2024-07-24 23:02:53.514543] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization...
00:14:35.816 [2024-07-24 23:02:53.514611] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:35.816 EAL: No free 2048 kB hugepages reported on node 1
00:14:35.816 [2024-07-24 23:02:53.597902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:36.078 [2024-07-24 23:02:53.696519] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:14:36.078 [2024-07-24 23:02:53.696597] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:14:36.078 [2024-07-24 23:02:53.696605] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:14:36.078 [2024-07-24 23:02:53.696612] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:14:36.078 [2024-07-24 23:02:53.696618] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:14:36.078 [2024-07-24 23:02:53.696643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:14:36.650 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:14:36.650 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0
00:14:36.650 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:14:36.650 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable
00:14:36.650 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:36.650 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:14:36.650 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:14:36.650 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:36.650 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:36.650 [2024-07-24 23:02:54.396301] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:36.650 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:36.650 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:14:36.650 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:36.650 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:36.651 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:36.651 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:36.651 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:36.651 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:36.651 [2024-07-24 23:02:54.420525] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:36.651 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:36.651 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:14:36.651 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:36.651 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:36.651 NULL1
00:14:36.912 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:36.912 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:14:36.912 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:36.912 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:36.912 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:36.912 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:14:36.912 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:36.912 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:36.912 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:36.913 23:02:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:14:36.913 [2024-07-24 23:02:54.490488] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization...
00:14:36.913 [2024-07-24 23:02:54.490530] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid802682 ]
00:14:36.913 EAL: No free 2048 kB hugepages reported on node 1
00:14:37.174 Attached to nqn.2016-06.io.spdk:cnode1
00:14:37.174 Namespace ID: 1 size: 1GB
00:14:37.174 fused_ordering(0)
[fused_ordering(1) through fused_ordering(1015) elided: the counter advanced by one per line with no gaps, emitted in bursts timestamped 00:14:37.174-175, 00:14:37.747-748, 00:14:38.009-010, 00:14:38.582, and 00:14:39.526]
00:14:39.526 fused_ordering(1016)
00:14:39.526
fused_ordering(1017) 00:14:39.526 fused_ordering(1018) 00:14:39.526 fused_ordering(1019) 00:14:39.526 fused_ordering(1020) 00:14:39.526 fused_ordering(1021) 00:14:39.526 fused_ordering(1022) 00:14:39.526 fused_ordering(1023) 00:14:39.526 23:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:39.526 23:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:39.527 23:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:39.527 23:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:39.527 23:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:39.527 23:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:39.527 23:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:39.527 23:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:39.527 rmmod nvme_tcp 00:14:39.527 rmmod nvme_fabrics 00:14:39.527 rmmod nvme_keyring 00:14:39.527 23:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:39.527 23:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:39.527 23:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:39.527 23:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 802584 ']' 00:14:39.527 23:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 802584 00:14:39.527 23:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 802584 ']' 00:14:39.527 23:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@954 -- # kill -0 802584 00:14:39.527 23:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:14:39.527 23:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:39.527 23:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 802584 00:14:39.527 23:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:39.527 23:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:39.527 23:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 802584' 00:14:39.527 killing process with pid 802584 00:14:39.527 23:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 802584 00:14:39.527 23:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 802584 00:14:39.527 23:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:39.527 23:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:39.527 23:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:39.527 23:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:39.527 23:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:39.527 23:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.527 23:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
00:14:39.527 23:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:42.075 00:14:42.075 real 0m14.133s 00:14:42.075 user 0m7.389s 00:14:42.075 sys 0m7.680s 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:42.075 ************************************ 00:14:42.075 END TEST nvmf_fused_ordering 00:14:42.075 ************************************ 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:42.075 ************************************ 00:14:42.075 START TEST nvmf_ns_masking 00:14:42.075 ************************************ 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:42.075 * Looking for test storage... 
00:14:42.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:42.075 
23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # 
loops=5 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=3004106b-57c7-4e5d-8fc3-7d858f4da7d5 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=5c3b1646-5181-4c2b-911c-607fd5ad4140 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=43189c0d-8b30-4502-a986-d586323351e2 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.075 23:02:59 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:42.075 23:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:50.219 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:50.219 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:50.219 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:50.219 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:50.219 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:50.219 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:50.219 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:50.219 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:50.219 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:50.219 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:50.219 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:50.219 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # 
x722=() 00:14:50.219 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:50.219 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:50.219 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:50.219 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:50.219 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:50.219 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:50.219 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:50.219 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:50.219 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:50.219 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:50.219 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:50.219 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:50.219 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:50.220 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:50.220 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:50.220 Found net devices under 0000:31:00.0: cvl_0_0 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:50.220 Found net devices under 0000:31:00.1: cvl_0_1 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:50.220 23:03:07 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:50.220 23:03:07 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:50.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:50.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:14:50.220 00:14:50.220 --- 10.0.0.2 ping statistics --- 00:14:50.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.220 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:50.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:50.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.351 ms 00:14:50.220 00:14:50.220 --- 10.0.0.1 ping statistics --- 00:14:50.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.220 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@51 -- # nvmfappstart 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=808116 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 808116 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:50.220 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 808116 ']' 00:14:50.221 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.221 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:50.221 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.221 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:50.221 23:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:50.221 [2024-07-24 23:03:07.736886] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:14:50.221 [2024-07-24 23:03:07.736946] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.221 EAL: No free 2048 kB hugepages reported on node 1 00:14:50.221 [2024-07-24 23:03:07.810042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.221 [2024-07-24 23:03:07.874459] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:50.221 [2024-07-24 23:03:07.874496] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:50.221 [2024-07-24 23:03:07.874503] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:50.221 [2024-07-24 23:03:07.874509] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:50.221 [2024-07-24 23:03:07.874515] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:50.221 [2024-07-24 23:03:07.874532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.793 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:50.793 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:50.793 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:50.793 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:50.793 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:50.793 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:50.793 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:51.054 [2024-07-24 23:03:08.664903] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:51.054 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:51.054 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:51.054 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:51.315 Malloc1 00:14:51.315 23:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:51.315 Malloc2 00:14:51.315 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:51.577 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:51.838 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:51.838 [2024-07-24 23:03:09.558458] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:51.838 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:51.838 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 43189c0d-8b30-4502-a986-d586323351e2 -a 10.0.0.2 -s 4420 -i 4 00:14:52.099 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:52.099 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:52.099 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:52.099 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:52.099 23:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:54.014 23:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:54.014 23:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:54.014 23:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # 
grep -c SPDKISFASTANDAWESOME 00:14:54.014 23:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:54.014 23:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:54.014 23:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:54.014 23:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:54.014 23:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:54.275 23:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:54.275 23:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:54.275 23:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:54.275 23:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:54.275 23:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:54.275 [ 0]:0x1 00:14:54.275 23:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:54.275 23:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:54.275 23:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1325b3f377aa4630bb2549532d941dba 00:14:54.275 23:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1325b3f377aa4630bb2549532d941dba != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:54.275 23:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:54.543 23:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:54.543 23:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:54.543 23:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:54.543 [ 0]:0x1 00:14:54.543 23:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:54.543 23:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:54.543 23:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1325b3f377aa4630bb2549532d941dba 00:14:54.543 23:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1325b3f377aa4630bb2549532d941dba != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:54.543 23:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:54.543 23:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:54.543 23:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:54.543 [ 1]:0x2 00:14:54.543 23:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:54.543 23:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:54.543 23:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e0c2c404a1014692a62df7a68fe550bf 00:14:54.543 23:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e0c2c404a1014692a62df7a68fe550bf != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:54.543 23:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:54.543 23:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:54.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.807 23:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:55.068 23:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:55.068 23:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:55.068 23:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 43189c0d-8b30-4502-a986-d586323351e2 -a 10.0.0.2 -s 4420 -i 4 00:14:55.328 23:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:55.328 23:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:55.328 23:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:55.328 23:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:14:55.328 23:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:14:55.328 23:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:57.238 23:03:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:57.238 23:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:57.238 23:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:57.238 23:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:57.238 23:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:57.238 23:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:57.238 23:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:57.238 23:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:57.238 23:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:57.238 23:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:57.238 23:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:57.238 23:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:57.238 23:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:57.238 23:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:57.238 23:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:57.238 23:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
type -t ns_is_visible 00:14:57.238 23:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:57.238 23:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:57.238 23:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:57.238 23:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:57.238 23:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:57.238 23:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:57.238 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:57.238 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:57.238 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:57.238 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:57.238 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:57.238 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:57.238 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:57.238 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:57.498 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:57.498 [ 0]:0x2 00:14:57.498 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:57.498 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:57.498 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e0c2c404a1014692a62df7a68fe550bf 00:14:57.498 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e0c2c404a1014692a62df7a68fe550bf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:57.498 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:57.498 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:57.498 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:57.498 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:57.498 [ 0]:0x1 00:14:57.498 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:57.498 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:57.792 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1325b3f377aa4630bb2549532d941dba 00:14:57.792 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1325b3f377aa4630bb2549532d941dba != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:57.792 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:57.792 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:57.792 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # grep 0x2 00:14:57.792 [ 1]:0x2 00:14:57.792 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:57.792 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:57.792 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e0c2c404a1014692a62df7a68fe550bf 00:14:57.792 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e0c2c404a1014692a62df7a68fe550bf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:57.792 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:57.792 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:57.792 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:57.792 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:57.792 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:57.792 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:57.792 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:57.792 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:57.792 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:57.792 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:14:57.792 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:57.792 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:57.792 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:58.077 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:58.077 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:58.077 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:58.077 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:58.077 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:58.077 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:58.077 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:58.078 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:58.078 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:58.078 [ 0]:0x2 00:14:58.078 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:58.078 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:58.078 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e0c2c404a1014692a62df7a68fe550bf 00:14:58.078 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- 
# [[ e0c2c404a1014692a62df7a68fe550bf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:58.078 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:58.078 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:58.078 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.078 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:58.078 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:58.078 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 43189c0d-8b30-4502-a986-d586323351e2 -a 10.0.0.2 -s 4420 -i 4 00:14:58.338 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:58.338 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:58.338 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:58.338 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:58.338 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:58.338 23:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:00.248 23:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:00.248 23:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l 
-o NAME,SERIAL 00:15:00.248 23:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:00.248 23:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:00.248 23:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:00.248 23:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:00.248 23:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:00.248 23:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:00.248 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:00.248 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:00.248 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:00.248 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:00.248 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:00.508 [ 0]:0x1 00:15:00.508 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:00.508 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:00.508 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1325b3f377aa4630bb2549532d941dba 00:15:00.508 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1325b3f377aa4630bb2549532d941dba != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:00.508 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:00.508 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:00.508 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:00.508 [ 1]:0x2 00:15:00.508 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:00.508 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:00.508 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e0c2c404a1014692a62df7a68fe550bf 00:15:00.508 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e0c2c404a1014692a62df7a68fe550bf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:00.508 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:00.508 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:00.508 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:00.508 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:00.508 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:00.508 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:00.508 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:15:00.508 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:00.508 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:00.769 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:00.769 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:00.769 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:00.769 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:00.769 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:00.769 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:00.769 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:00.769 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:00.769 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:00.769 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:00.769 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:15:00.769 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:00.769 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:00.769 [ 0]:0x2 00:15:00.769 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:00.769 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:00.769 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e0c2c404a1014692a62df7a68fe550bf 00:15:00.769 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e0c2c404a1014692a62df7a68fe550bf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:00.769 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:00.769 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:00.769 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:00.769 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:00.769 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:00.769 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:00.769 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:00.769 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:00.769 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
case "$(type -t "$arg")" in 00:15:00.769 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:00.769 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:00.769 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:00.769 [2024-07-24 23:03:18.552166] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:01.030 request: 00:15:01.030 { 00:15:01.030 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:01.030 "nsid": 2, 00:15:01.030 "host": "nqn.2016-06.io.spdk:host1", 00:15:01.030 "method": "nvmf_ns_remove_host", 00:15:01.030 "req_id": 1 00:15:01.030 } 00:15:01.030 Got JSON-RPC error response 00:15:01.030 response: 00:15:01.030 { 00:15:01.030 "code": -32602, 00:15:01.030 "message": "Invalid parameters" 00:15:01.030 } 00:15:01.030 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:01.030 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:01.030 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:01.030 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:01.030 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:01.030 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:01.030 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # 
valid_exec_arg ns_is_visible 0x1 00:15:01.030 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:01.030 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:01.030 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:01.030 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:01.030 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:01.030 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:01.030 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:01.030 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:01.030 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:01.030 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:01.030 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:01.030 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:01.030 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:01.030 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:01.030 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:01.030 23:03:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:01.030 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:01.030 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:01.030 [ 0]:0x2 00:15:01.030 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:01.030 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:01.030 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e0c2c404a1014692a62df7a68fe550bf 00:15:01.030 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e0c2c404a1014692a62df7a68fe550bf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:01.030 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:01.030 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:01.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.291 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=810754 00:15:01.291 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:01.291 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:01.291 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 810754 /var/tmp/host.sock 00:15:01.291 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 810754 ']' 00:15:01.291 23:03:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:15:01.291 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:01.291 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:01.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:01.291 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:01.291 23:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:01.291 [2024-07-24 23:03:18.916606] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:15:01.291 [2024-07-24 23:03:18.916659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid810754 ] 00:15:01.291 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.292 [2024-07-24 23:03:19.000435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.292 [2024-07-24 23:03:19.064125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.234 23:03:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:02.234 23:03:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:15:02.234 23:03:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:02.234 23:03:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:02.234 23:03:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 3004106b-57c7-4e5d-8fc3-7d858f4da7d5 00:15:02.234 23:03:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:15:02.234 23:03:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 3004106B57C74E5D8FC37D858F4DA7D5 -i 00:15:02.494 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 5c3b1646-5181-4c2b-911c-607fd5ad4140 00:15:02.494 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:15:02.494 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 5C3B164651814C2B911C607FD5AD4140 -i 00:15:02.754 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:02.754 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:03.015 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:03.015 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:03.276 nvme0n1 00:15:03.276 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:03.276 23:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:03.276 nvme1n2 00:15:03.276 23:03:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:03.276 23:03:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:03.276 23:03:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:03.276 23:03:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:03.276 23:03:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:03.536 23:03:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:03.536 23:03:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:03.536 23:03:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:03.536 23:03:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:03.797 23:03:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 3004106b-57c7-4e5d-8fc3-7d858f4da7d5 == \3\0\0\4\1\0\6\b\-\5\7\c\7\-\4\e\5\d\-\8\f\c\3\-\7\d\8\5\8\f\4\d\a\7\d\5 ]] 00:15:03.797 23:03:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:03.797 23:03:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:03.797 23:03:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:03.797 23:03:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 5c3b1646-5181-4c2b-911c-607fd5ad4140 == \5\c\3\b\1\6\4\6\-\5\1\8\1\-\4\c\2\b\-\9\1\1\c\-\6\0\7\f\d\5\a\d\4\1\4\0 ]] 00:15:03.797 23:03:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 810754 00:15:03.797 23:03:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 810754 ']' 00:15:03.797 23:03:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 810754 00:15:03.797 23:03:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:15:03.797 23:03:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:03.797 23:03:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 810754 00:15:04.058 23:03:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:04.058 23:03:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:04.058 
23:03:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 810754' 00:15:04.058 killing process with pid 810754 00:15:04.058 23:03:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 810754 00:15:04.058 23:03:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 810754 00:15:04.058 23:03:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:04.318 23:03:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:15:04.318 23:03:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:15:04.318 23:03:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:04.318 23:03:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:15:04.318 23:03:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:04.318 23:03:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:15:04.318 23:03:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:04.318 23:03:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:04.318 rmmod nvme_tcp 00:15:04.318 rmmod nvme_fabrics 00:15:04.318 rmmod nvme_keyring 00:15:04.318 23:03:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:04.318 23:03:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:15:04.318 23:03:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:15:04.318 23:03:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 
808116 ']' 00:15:04.318 23:03:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 808116 00:15:04.318 23:03:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 808116 ']' 00:15:04.318 23:03:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 808116 00:15:04.318 23:03:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:15:04.318 23:03:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:04.318 23:03:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 808116 00:15:04.579 23:03:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:04.579 23:03:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:04.579 23:03:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 808116' 00:15:04.579 killing process with pid 808116 00:15:04.579 23:03:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 808116 00:15:04.579 23:03:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 808116 00:15:04.579 23:03:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:04.579 23:03:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:04.579 23:03:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:04.579 23:03:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:04.579 23:03:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:04.579 23:03:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.579 23:03:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:04.579 23:03:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:07.124 00:15:07.124 real 0m24.958s 00:15:07.124 user 0m24.098s 00:15:07.124 sys 0m7.884s 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:07.124 ************************************ 00:15:07.124 END TEST nvmf_ns_masking 00:15:07.124 ************************************ 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:07.124 ************************************ 00:15:07.124 START TEST nvmf_nvme_cli 00:15:07.124 ************************************ 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:07.124 * Looking for test storage... 
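The ns_masking phase that just ended exercises two harness helpers visible in the trace: a `NOT` wrapper that succeeds only when the wrapped command fails (tracked through the `es=1` / `(( !es == 0 ))` bookkeeping from common/autotest_common.sh), and `uuid2nguid`, which turns a UUID into the dashless uppercase NGUID passed to `nvmf_subsystem_add_ns -g`. A minimal sketch of both, reconstructed from the trace rather than taken from the actual SPDK source:

```shell
#!/usr/bin/env bash
# NOT: succeed only if the wrapped command fails (reconstruction of the
# common/autotest_common.sh idiom traced above as es=1 / (( !es == 0 ))).
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, which is what the caller wanted
}

# uuid2nguid: uppercase and strip dashes, matching the trace where
# 3004106b-57c7-4e5d-8fc3-7d858f4da7d5 becomes 3004106B57C74E5D8FC37D858F4DA7D5.
uuid2nguid() {
    echo "${1^^}" | tr -d -
}

NOT false && echo "NOT false ok"
uuid2nguid 3004106b-57c7-4e5d-8fc3-7d858f4da7d5
```

This is why the failed `nvmf_ns_remove_host` RPC earlier (rejected with -32602 while the subsystem was paused) still lets the test proceed: the non-zero exit code is exactly what `NOT` asserts.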
00:15:07.124 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:07.124 23:03:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:15:07.124 23:03:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:15.268 
23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:15.268 23:03:32 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:15.268 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:15.268 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:15.268 Found net devices under 0000:31:00.0: cvl_0_0 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:15.268 Found net devices under 0000:31:00.1: cvl_0_1 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:15.268 23:03:32 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:15.268 23:03:32 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:15.268 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:15.269 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:15.269 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:15.269 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:15.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:15.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.489 ms 00:15:15.269 00:15:15.269 --- 10.0.0.2 ping statistics --- 00:15:15.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.269 rtt min/avg/max/mdev = 0.489/0.489/0.489/0.000 ms 00:15:15.269 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:15.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:15.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:15:15.269 00:15:15.269 --- 10.0.0.1 ping statistics --- 00:15:15.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.269 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:15:15.269 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:15.269 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:15:15.269 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:15.269 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:15.269 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:15.269 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:15.269 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:15.269 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:15.269 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:15.269 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:15.269 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:15.269 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:15.269 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:15.269 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=816128 00:15:15.269 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 816128 00:15:15.269 23:03:32 
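The `nvmf_tcp_init` trace above builds a two-port loopback topology: one physical port (`cvl_0_0`) is moved into a private network namespace and becomes the target side at 10.0.0.2, while its sibling (`cvl_0_1`) stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule admitting NVMe/TCP traffic on port 4420 and a ping in each direction to verify reachability. A minimal sketch of that sequence follows; the interface names, namespace name, and addresses are taken from the log, and `run` only prints each command so the sketch is safe to execute without root or the test hardware (swap it for `sudo` to apply for real):

```shell
#!/usr/bin/env bash
# Sketch of the namespace topology the harness builds (nvmf_tcp_init in
# nvmf/common.sh, per the trace above). "run" prints rather than executes,
# so this is a dry run -- replace it with `sudo` on a machine that has the
# cvl_0_0/cvl_0_1 ports to apply the configuration.
run() { printf '+ %s\n' "$*"; }

TARGET_IF=cvl_0_0        # moved into its own namespace, gets 10.0.0.2
INITIATOR_IF=cvl_0_1     # stays in the root namespace, gets 10.0.0.1
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                        # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator
```

Because the target port lives in the namespace, the `nvmf_tgt` application is later launched under `ip netns exec cvl_0_0_ns_spdk`, which is why the trace prepends `NVMF_TARGET_NS_CMD` to `NVMF_APP`.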
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:15.269 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 816128 ']' 00:15:15.269 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.269 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:15.269 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.269 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:15.269 23:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:15.269 [2024-07-24 23:03:32.659037] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:15:15.269 [2024-07-24 23:03:32.659101] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.269 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.269 [2024-07-24 23:03:32.737608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:15.269 [2024-07-24 23:03:32.812698] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:15.269 [2024-07-24 23:03:32.812739] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:15.269 [2024-07-24 23:03:32.812749] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:15.269 [2024-07-24 23:03:32.812763] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:15.269 [2024-07-24 23:03:32.812768] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:15.269 [2024-07-24 23:03:32.812849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.269 [2024-07-24 23:03:32.812984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:15.269 [2024-07-24 23:03:32.813130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.269 [2024-07-24 23:03:32.813131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:15.847 [2024-07-24 23:03:33.488669] tcp.c: 677:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:15.847 Malloc0 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:15.847 Malloc1 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.847 23:03:33 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:15.847 [2024-07-24 23:03:33.578361] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.847 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 4420 00:15:16.109 00:15:16.109 Discovery Log Number of Records 2, Generation counter 2 00:15:16.109 =====Discovery Log Entry 0====== 00:15:16.109 trtype: tcp 00:15:16.109 adrfam: ipv4 00:15:16.109 subtype: current discovery subsystem 00:15:16.109 treq: not required 00:15:16.109 portid: 0 00:15:16.109 trsvcid: 4420 00:15:16.109 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:16.109 traddr: 10.0.0.2 00:15:16.109 eflags: explicit discovery connections, duplicate discovery information 00:15:16.109 sectype: none 00:15:16.109 =====Discovery Log Entry 1====== 00:15:16.109 trtype: tcp 00:15:16.109 adrfam: ipv4 00:15:16.109 subtype: nvme subsystem 00:15:16.109 treq: not required 00:15:16.109 portid: 0 00:15:16.109 trsvcid: 4420 00:15:16.109 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:16.109 traddr: 10.0.0.2 00:15:16.109 eflags: none 00:15:16.109 sectype: none 00:15:16.109 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:16.109 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:16.109 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:16.109 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:16.109 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:16.109 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:16.109 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:16.109 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:16.109 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
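The discovery output above reports two records: entry 0 is the well-known discovery subsystem (`nqn.2014-08.org.nvmexpress.discovery`) and entry 1 is the test subsystem `nqn.2016-06.io.spdk:cnode1`, both on 10.0.0.2:4420. A small sketch of checking that count follows; the heredoc stands in for a live `nvme discover -t tcp -a 10.0.0.2 -s 4420` run, reproducing (abridged) the records shown in the log:

```shell
#!/usr/bin/env bash
# Count records in `nvme discover` output. The sample text is an abridged
# copy of the discovery log printed above; pipe the real command's output
# through the same grep on a live system.
discovery_log=$(cat <<'EOF'
Discovery Log Number of Records 2, Generation counter 2
=====Discovery Log Entry 0======
trtype:  tcp
subnqn:  nqn.2014-08.org.nvmexpress.discovery
=====Discovery Log Entry 1======
trtype:  tcp
subnqn:  nqn.2016-06.io.spdk:cnode1
EOF
)
records=$(grep -c '^=====Discovery Log Entry' <<<"$discovery_log")
echo "$records"   # 2
```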
00:15:16.109 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:16.109 23:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:18.021 23:03:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:18.021 23:03:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:15:18.021 23:03:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:18.021 23:03:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:15:18.021 23:03:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:15:18.021 23:03:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:15:19.933 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:19.933 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:19.933 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:19.933 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:19.933 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:19.933 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:15:19.933 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 
00:15:19.933 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:19.933 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:19.933 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:19.933 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:19.933 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:19.933 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:19.933 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:19.934 /dev/nvme0n1 ]] 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev 
_ 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:19.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
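The `get_nvme_devs` trace above (repeated before and after `nvme connect`) is a `read` loop over `nvme list` output: header lines such as `Node` and the dashed separator are skipped, and only lines whose first field is a `/dev/nvme*` node are emitted, so `nvme_num_before_connection=0` becomes `nvme_num=2` once the two namespaces appear. A self-contained sketch of that filter, using sample `nvme list` output modeled on the devices the log reports (`/dev/nvme0n2`, `/dev/nvme0n1`):

```shell
#!/usr/bin/env bash
# Sketch of the get_nvme_devs helper traced above (nvmf/common.sh): keep
# only lines of `nvme list` output whose first field is a /dev/nvme* node.
get_nvme_devs() {
    local dev _
    while read -r dev _; do
        [[ $dev == /dev/nvme* ]] && echo "$dev"
    done
}

# Sample data standing in for a live `nvme list`; device names and serial
# mirror what the log reports after connecting to cnode1.
sample='Node                 SN                   Model
-------------------- -------------------- ----------------
/dev/nvme0n2         SPDKISFASTANDAWESOME SPDK_Controller1
/dev/nvme0n1         SPDKISFASTANDAWESOME SPDK_Controller1'

devs=($(get_nvme_devs <<<"$sample"))
echo "${#devs[@]}"   # 2
```

The same pattern drives `waitforserial`, which polls `lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME` until the count matches the expected number of namespaces.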
common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:19.934 rmmod nvme_tcp 00:15:19.934 rmmod nvme_fabrics 00:15:19.934 rmmod 
nvme_keyring 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 816128 ']' 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 816128 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 816128 ']' 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 816128 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 816128 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 816128' 00:15:19.934 killing process with pid 816128 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 816128 00:15:19.934 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 816128 00:15:20.194 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:20.194 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- 
# [[ tcp == \t\c\p ]] 00:15:20.194 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:20.194 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:20.194 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:20.194 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.194 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:20.194 23:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:22.106 23:03:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:22.106 00:15:22.106 real 0m15.416s 00:15:22.106 user 0m22.014s 00:15:22.106 sys 0m6.528s 00:15:22.106 23:03:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:22.106 23:03:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:22.106 ************************************ 00:15:22.106 END TEST nvmf_nvme_cli 00:15:22.106 ************************************ 00:15:22.106 23:03:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:15:22.106 23:03:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:22.106 23:03:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:22.106 23:03:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:22.106 23:03:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:22.367 ************************************ 00:15:22.367 
START TEST nvmf_vfio_user 00:15:22.367 ************************************ 00:15:22.367 23:03:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:22.367 * Looking for test storage... 00:15:22.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:22.367 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:22.367 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:22.367 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:22.367 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:22.367 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:22.367 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:22.367 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:22.367 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:22.367 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:22.367 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:22.367 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:22.367 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:22.367 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:22.367 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:22.367 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:22.367 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:22.367 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:22.367 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:22.367 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:22.367 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:22.367 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:22.367 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:22.367 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.367 23:03:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.367 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.368 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:22.368 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.368 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:15:22.368 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:22.368 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:22.368 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:22.368 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:22.368 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:22.368 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:22.368 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:22.368 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:22.368 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:22.368 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:22.368 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:22.368 23:03:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:22.368 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:22.368 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:22.368 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:22.368 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:22.368 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:22.368 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:22.368 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=817609 00:15:22.368 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 817609' 00:15:22.368 Process pid: 817609 00:15:22.368 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:22.368 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 817609 00:15:22.368 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:22.368 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 817609 ']' 00:15:22.368 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.368 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:15:22.368 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.368 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:22.368 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:22.368 [2024-07-24 23:03:40.114857] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:15:22.368 [2024-07-24 23:03:40.114914] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.368 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.628 [2024-07-24 23:03:40.185994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:22.628 [2024-07-24 23:03:40.250890] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:22.628 [2024-07-24 23:03:40.250926] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:22.628 [2024-07-24 23:03:40.250933] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:22.628 [2024-07-24 23:03:40.250940] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:22.628 [2024-07-24 23:03:40.250945] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:22.628 [2024-07-24 23:03:40.251081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:22.628 [2024-07-24 23:03:40.251098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:22.628 [2024-07-24 23:03:40.251219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.628 [2024-07-24 23:03:40.251220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:23.198 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:23.198 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:23.198 23:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:24.138 23:03:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:24.398 23:03:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:24.398 23:03:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:24.398 23:03:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:24.398 23:03:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:24.398 23:03:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:24.658 Malloc1 00:15:24.658 23:03:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:24.659 23:03:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:24.918 23:03:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:25.177 23:03:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:25.177 23:03:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:25.177 23:03:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:25.177 Malloc2 00:15:25.177 23:03:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:25.466 23:03:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:25.782 23:03:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:25.782 23:03:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:25.782 23:03:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:25.782 23:03:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:15:25.782 23:03:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:25.782 23:03:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:25.782 23:03:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:25.782 [2024-07-24 23:03:43.465705] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:15:25.782 [2024-07-24 23:03:43.465775] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid818306 ] 00:15:25.782 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.782 [2024-07-24 23:03:43.499394] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:25.782 [2024-07-24 23:03:43.501691] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:25.782 [2024-07-24 23:03:43.501710] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f6a6bec9000 00:15:25.782 [2024-07-24 23:03:43.502687] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:25.782 [2024-07-24 23:03:43.503691] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:25.782 [2024-07-24 
23:03:43.504694] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:25.782 [2024-07-24 23:03:43.505702] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:25.782 [2024-07-24 23:03:43.506705] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:25.782 [2024-07-24 23:03:43.509758] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:25.782 [2024-07-24 23:03:43.510724] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:25.782 [2024-07-24 23:03:43.511733] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:25.782 [2024-07-24 23:03:43.512742] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:25.782 [2024-07-24 23:03:43.512753] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6a6bebe000 00:15:25.782 [2024-07-24 23:03:43.514079] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:25.782 [2024-07-24 23:03:43.534998] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:25.782 [2024-07-24 23:03:43.535023] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:25.782 [2024-07-24 23:03:43.537871] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 
0x0, value 0x201e0100ff 00:15:25.782 [2024-07-24 23:03:43.537915] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:25.782 [2024-07-24 23:03:43.538001] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:25.782 [2024-07-24 23:03:43.538018] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:25.782 [2024-07-24 23:03:43.538023] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:25.782 [2024-07-24 23:03:43.538870] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:25.782 [2024-07-24 23:03:43.538881] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:25.782 [2024-07-24 23:03:43.538888] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:25.782 [2024-07-24 23:03:43.539876] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:25.782 [2024-07-24 23:03:43.539884] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:25.782 [2024-07-24 23:03:43.539892] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:25.782 [2024-07-24 23:03:43.540880] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:25.782 [2024-07-24 23:03:43.540889] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:25.782 [2024-07-24 23:03:43.541887] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:25.782 [2024-07-24 23:03:43.541894] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:25.782 [2024-07-24 23:03:43.541899] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:25.782 [2024-07-24 23:03:43.541905] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:25.782 [2024-07-24 23:03:43.542010] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:25.782 [2024-07-24 23:03:43.542015] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:25.782 [2024-07-24 23:03:43.542020] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:25.782 [2024-07-24 23:03:43.542896] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:25.782 [2024-07-24 23:03:43.543899] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:25.782 [2024-07-24 23:03:43.544900] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:25.782 
[2024-07-24 23:03:43.545906] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:25.782 [2024-07-24 23:03:43.545973] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:25.782 [2024-07-24 23:03:43.546913] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:25.782 [2024-07-24 23:03:43.546920] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:25.782 [2024-07-24 23:03:43.546925] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:25.782 [2024-07-24 23:03:43.546946] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:25.782 [2024-07-24 23:03:43.546953] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:25.782 [2024-07-24 23:03:43.546967] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:25.782 [2024-07-24 23:03:43.546972] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:25.782 [2024-07-24 23:03:43.546976] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:25.782 [2024-07-24 23:03:43.546989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:25.782 [2024-07-24 23:03:43.547023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 
cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:25.782 [2024-07-24 23:03:43.547032] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:25.782 [2024-07-24 23:03:43.547039] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:25.782 [2024-07-24 23:03:43.547043] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:25.782 [2024-07-24 23:03:43.547048] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:25.782 [2024-07-24 23:03:43.547053] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:25.782 [2024-07-24 23:03:43.547057] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:25.782 [2024-07-24 23:03:43.547062] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:25.782 [2024-07-24 23:03:43.547070] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:25.782 [2024-07-24 23:03:43.547081] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:25.782 [2024-07-24 23:03:43.547093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:25.782 [2024-07-24 23:03:43.547105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:25.782 [2024-07-24 23:03:43.547114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:25.782 [2024-07-24 23:03:43.547122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:25.782 [2024-07-24 23:03:43.547130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:25.782 [2024-07-24 23:03:43.547135] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:25.782 [2024-07-24 23:03:43.547143] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:25.782 [2024-07-24 23:03:43.547152] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:25.782 [2024-07-24 23:03:43.547161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:25.782 [2024-07-24 23:03:43.547167] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:25.782 [2024-07-24 23:03:43.547172] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:25.782 [2024-07-24 23:03:43.547180] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:25.782 [2024-07-24 23:03:43.547185] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:25.782 [2024-07-24 23:03:43.547194] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:25.782 [2024-07-24 23:03:43.547201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:25.782 [2024-07-24 23:03:43.547262] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:25.782 [2024-07-24 23:03:43.547270] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:25.782 [2024-07-24 23:03:43.547279] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:25.782 [2024-07-24 23:03:43.547283] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:25.782 [2024-07-24 23:03:43.547287] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:25.782 [2024-07-24 23:03:43.547293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:25.782 [2024-07-24 23:03:43.547304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:25.782 [2024-07-24 23:03:43.547313] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:25.782 [2024-07-24 23:03:43.547324] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:25.782 [2024-07-24 23:03:43.547332] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:25.782 [2024-07-24 
23:03:43.547339] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:25.782 [2024-07-24 23:03:43.547344] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:25.782 [2024-07-24 23:03:43.547347] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:25.782 [2024-07-24 23:03:43.547353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:25.782 [2024-07-24 23:03:43.547370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:25.782 [2024-07-24 23:03:43.547384] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:25.782 [2024-07-24 23:03:43.547391] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:25.782 [2024-07-24 23:03:43.547398] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:25.782 [2024-07-24 23:03:43.547402] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:25.782 [2024-07-24 23:03:43.547405] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:25.782 [2024-07-24 23:03:43.547411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:25.782 [2024-07-24 23:03:43.547418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:25.782 [2024-07-24 23:03:43.547426] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:25.782 [2024-07-24 23:03:43.547432] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:25.782 [2024-07-24 23:03:43.547439] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:25.782 [2024-07-24 23:03:43.547447] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:25.782 [2024-07-24 23:03:43.547452] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:25.782 [2024-07-24 23:03:43.547457] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:25.782 [2024-07-24 23:03:43.547464] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:25.782 [2024-07-24 23:03:43.547468] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:25.782 [2024-07-24 23:03:43.547473] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:25.782 [2024-07-24 23:03:43.547491] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:25.782 [2024-07-24 23:03:43.547500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 
00:15:25.782 [2024-07-24 23:03:43.547511] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:25.782 [2024-07-24 23:03:43.547518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:25.782 [2024-07-24 23:03:43.547529] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:25.782 [2024-07-24 23:03:43.547536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:25.783 [2024-07-24 23:03:43.547546] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:25.783 [2024-07-24 23:03:43.547556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:25.783 [2024-07-24 23:03:43.547568] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:25.783 [2024-07-24 23:03:43.547573] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:25.783 [2024-07-24 23:03:43.547577] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:25.783 [2024-07-24 23:03:43.547580] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:25.783 [2024-07-24 23:03:43.547583] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:25.783 [2024-07-24 23:03:43.547589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:25.783 [2024-07-24 23:03:43.547597] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fc000 len:512 00:15:25.783 [2024-07-24 23:03:43.547601] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:25.783 [2024-07-24 23:03:43.547604] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:25.783 [2024-07-24 23:03:43.547610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:25.783 [2024-07-24 23:03:43.547617] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:25.783 [2024-07-24 23:03:43.547622] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:25.783 [2024-07-24 23:03:43.547625] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:25.783 [2024-07-24 23:03:43.547631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:25.783 [2024-07-24 23:03:43.547638] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:25.783 [2024-07-24 23:03:43.547642] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:25.783 [2024-07-24 23:03:43.547646] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:25.783 [2024-07-24 23:03:43.547653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:25.783 [2024-07-24 23:03:43.547660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:25.783 [2024-07-24 23:03:43.547671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:25.783 [2024-07-24 23:03:43.547682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:25.783 [2024-07-24 23:03:43.547689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:25.783 ===================================================== 00:15:25.783 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:25.783 ===================================================== 00:15:25.783 Controller Capabilities/Features 00:15:25.783 ================================ 00:15:25.783 Vendor ID: 4e58 00:15:25.783 Subsystem Vendor ID: 4e58 00:15:25.783 Serial Number: SPDK1 00:15:25.783 Model Number: SPDK bdev Controller 00:15:25.783 Firmware Version: 24.09 00:15:25.783 Recommended Arb Burst: 6 00:15:25.783 IEEE OUI Identifier: 8d 6b 50 00:15:25.783 Multi-path I/O 00:15:25.783 May have multiple subsystem ports: Yes 00:15:25.783 May have multiple controllers: Yes 00:15:25.783 Associated with SR-IOV VF: No 00:15:25.783 Max Data Transfer Size: 131072 00:15:25.783 Max Number of Namespaces: 32 00:15:25.783 Max Number of I/O Queues: 127 00:15:25.783 NVMe Specification Version (VS): 1.3 00:15:25.783 NVMe Specification Version (Identify): 1.3 00:15:25.783 Maximum Queue Entries: 256 00:15:25.783 Contiguous Queues Required: Yes 00:15:25.783 Arbitration Mechanisms Supported 00:15:25.783 Weighted Round Robin: Not Supported 00:15:25.783 Vendor Specific: Not Supported 00:15:25.783 Reset Timeout: 15000 ms 00:15:25.783 Doorbell Stride: 4 bytes 00:15:25.783 NVM Subsystem Reset: Not Supported 00:15:25.783 Command Sets Supported 00:15:25.783 NVM Command Set: Supported 00:15:25.783 Boot Partition: Not Supported 00:15:25.783 Memory Page Size Minimum: 4096 bytes 00:15:25.783 Memory Page Size Maximum: 4096 bytes 00:15:25.783 Persistent Memory Region: Not 
Supported 00:15:25.783 Optional Asynchronous Events Supported 00:15:25.783 Namespace Attribute Notices: Supported 00:15:25.783 Firmware Activation Notices: Not Supported 00:15:25.783 ANA Change Notices: Not Supported 00:15:25.783 PLE Aggregate Log Change Notices: Not Supported 00:15:25.783 LBA Status Info Alert Notices: Not Supported 00:15:25.783 EGE Aggregate Log Change Notices: Not Supported 00:15:25.783 Normal NVM Subsystem Shutdown event: Not Supported 00:15:25.783 Zone Descriptor Change Notices: Not Supported 00:15:25.783 Discovery Log Change Notices: Not Supported 00:15:25.783 Controller Attributes 00:15:25.783 128-bit Host Identifier: Supported 00:15:25.783 Non-Operational Permissive Mode: Not Supported 00:15:25.783 NVM Sets: Not Supported 00:15:25.783 Read Recovery Levels: Not Supported 00:15:25.783 Endurance Groups: Not Supported 00:15:25.783 Predictable Latency Mode: Not Supported 00:15:25.783 Traffic Based Keep ALive: Not Supported 00:15:25.783 Namespace Granularity: Not Supported 00:15:25.783 SQ Associations: Not Supported 00:15:25.783 UUID List: Not Supported 00:15:25.783 Multi-Domain Subsystem: Not Supported 00:15:25.783 Fixed Capacity Management: Not Supported 00:15:25.783 Variable Capacity Management: Not Supported 00:15:25.783 Delete Endurance Group: Not Supported 00:15:25.783 Delete NVM Set: Not Supported 00:15:25.783 Extended LBA Formats Supported: Not Supported 00:15:25.783 Flexible Data Placement Supported: Not Supported 00:15:25.783 00:15:25.783 Controller Memory Buffer Support 00:15:25.783 ================================ 00:15:25.783 Supported: No 00:15:25.783 00:15:25.783 Persistent Memory Region Support 00:15:25.783 ================================ 00:15:25.783 Supported: No 00:15:25.783 00:15:25.783 Admin Command Set Attributes 00:15:25.783 ============================ 00:15:25.783 Security Send/Receive: Not Supported 00:15:25.783 Format NVM: Not Supported 00:15:25.783 Firmware Activate/Download: Not Supported 00:15:25.783 Namespace 
Management: Not Supported 00:15:25.783 Device Self-Test: Not Supported 00:15:25.783 Directives: Not Supported 00:15:25.783 NVMe-MI: Not Supported 00:15:25.783 Virtualization Management: Not Supported 00:15:25.783 Doorbell Buffer Config: Not Supported 00:15:25.783 Get LBA Status Capability: Not Supported 00:15:25.783 Command & Feature Lockdown Capability: Not Supported 00:15:25.783 Abort Command Limit: 4 00:15:25.783 Async Event Request Limit: 4 00:15:25.783 Number of Firmware Slots: N/A 00:15:25.783 Firmware Slot 1 Read-Only: N/A 00:15:25.783 Firmware Activation Without Reset: N/A 00:15:25.783 Multiple Update Detection Support: N/A 00:15:25.783 Firmware Update Granularity: No Information Provided 00:15:25.783 Per-Namespace SMART Log: No 00:15:25.783 Asymmetric Namespace Access Log Page: Not Supported 00:15:25.783 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:25.783 Command Effects Log Page: Supported 00:15:25.783 Get Log Page Extended Data: Supported 00:15:25.783 Telemetry Log Pages: Not Supported 00:15:25.783 Persistent Event Log Pages: Not Supported 00:15:25.783 Supported Log Pages Log Page: May Support 00:15:25.783 Commands Supported & Effects Log Page: Not Supported 00:15:25.783 Feature Identifiers & Effects Log Page:May Support 00:15:25.783 NVMe-MI Commands & Effects Log Page: May Support 00:15:25.783 Data Area 4 for Telemetry Log: Not Supported 00:15:25.783 Error Log Page Entries Supported: 128 00:15:25.783 Keep Alive: Supported 00:15:25.783 Keep Alive Granularity: 10000 ms 00:15:25.783 00:15:25.783 NVM Command Set Attributes 00:15:25.783 ========================== 00:15:25.783 Submission Queue Entry Size 00:15:25.783 Max: 64 00:15:25.783 Min: 64 00:15:25.783 Completion Queue Entry Size 00:15:25.783 Max: 16 00:15:25.783 Min: 16 00:15:25.783 Number of Namespaces: 32 00:15:25.783 Compare Command: Supported 00:15:25.783 Write Uncorrectable Command: Not Supported 00:15:25.783 Dataset Management Command: Supported 00:15:25.783 Write Zeroes Command: Supported 
00:15:25.783 Set Features Save Field: Not Supported 00:15:25.783 Reservations: Not Supported 00:15:25.783 Timestamp: Not Supported 00:15:25.783 Copy: Supported 00:15:25.783 Volatile Write Cache: Present 00:15:25.783 Atomic Write Unit (Normal): 1 00:15:25.783 Atomic Write Unit (PFail): 1 00:15:25.783 Atomic Compare & Write Unit: 1 00:15:25.783 Fused Compare & Write: Supported 00:15:25.783 Scatter-Gather List 00:15:25.783 SGL Command Set: Supported (Dword aligned) 00:15:25.783 SGL Keyed: Not Supported 00:15:25.783 SGL Bit Bucket Descriptor: Not Supported 00:15:25.783 SGL Metadata Pointer: Not Supported 00:15:25.783 Oversized SGL: Not Supported 00:15:25.783 SGL Metadata Address: Not Supported 00:15:25.783 SGL Offset: Not Supported 00:15:25.783 Transport SGL Data Block: Not Supported 00:15:25.783 Replay Protected Memory Block: Not Supported 00:15:25.783 00:15:25.783 Firmware Slot Information 00:15:25.783 ========================= 00:15:25.783 Active slot: 1 00:15:25.783 Slot 1 Firmware Revision: 24.09 00:15:25.783 00:15:25.783 00:15:25.783 Commands Supported and Effects 00:15:25.783 ============================== 00:15:25.783 Admin Commands 00:15:25.783 -------------- 00:15:25.783 Get Log Page (02h): Supported 00:15:25.783 Identify (06h): Supported 00:15:25.783 Abort (08h): Supported 00:15:25.783 Set Features (09h): Supported 00:15:25.783 Get Features (0Ah): Supported 00:15:25.783 Asynchronous Event Request (0Ch): Supported 00:15:25.783 Keep Alive (18h): Supported 00:15:25.783 I/O Commands 00:15:25.783 ------------ 00:15:25.783 Flush (00h): Supported LBA-Change 00:15:25.783 Write (01h): Supported LBA-Change 00:15:25.783 Read (02h): Supported 00:15:25.783 Compare (05h): Supported 00:15:25.783 Write Zeroes (08h): Supported LBA-Change 00:15:25.783 Dataset Management (09h): Supported LBA-Change 00:15:25.783 Copy (19h): Supported LBA-Change 00:15:25.783 00:15:25.783 Error Log 00:15:25.783 ========= 00:15:25.783 00:15:25.783 Arbitration 00:15:25.783 =========== 00:15:25.783 
Arbitration Burst: 1 00:15:25.783 00:15:25.783 Power Management 00:15:25.783 ================ 00:15:25.783 Number of Power States: 1 00:15:25.783 Current Power State: Power State #0 00:15:25.783 Power State #0: 00:15:25.783 Max Power: 0.00 W 00:15:25.783 Non-Operational State: Operational 00:15:25.783 Entry Latency: Not Reported 00:15:25.783 Exit Latency: Not Reported 00:15:25.783 Relative Read Throughput: 0 00:15:25.783 Relative Read Latency: 0 00:15:25.783 Relative Write Throughput: 0 00:15:25.783 Relative Write Latency: 0 00:15:25.783 Idle Power: Not Reported 00:15:25.783 Active Power: Not Reported 00:15:25.783 Non-Operational Permissive Mode: Not Supported 00:15:25.783 00:15:25.783 Health Information 00:15:25.783 ================== 00:15:25.783 Critical Warnings: 00:15:25.783 Available Spare Space: OK 00:15:25.783 Temperature: OK 00:15:25.783 Device Reliability: OK 00:15:25.783 Read Only: No 00:15:25.783 Volatile Memory Backup: OK 00:15:25.783 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:25.783 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:25.783 Available Spare: 0% 00:15:25.783 Available Sp[2024-07-24 23:03:43.547796] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:25.783 [2024-07-24 23:03:43.547804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:25.783 [2024-07-24 23:03:43.547830] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:25.783 [2024-07-24 23:03:43.547839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:25.783 [2024-07-24 23:03:43.547845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:25.783 [2024-07-24 23:03:43.547851] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:25.783 [2024-07-24 23:03:43.547858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:25.783 [2024-07-24 23:03:43.547921] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:25.783 [2024-07-24 23:03:43.547931] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:25.783 [2024-07-24 23:03:43.548917] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:25.783 [2024-07-24 23:03:43.548957] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:25.783 [2024-07-24 23:03:43.548963] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:25.783 [2024-07-24 23:03:43.549926] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:25.783 [2024-07-24 23:03:43.549936] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:25.783 [2024-07-24 23:03:43.549992] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:25.783 [2024-07-24 23:03:43.554758] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:26.043 are Threshold: 0% 00:15:26.043 Life Percentage Used: 0% 00:15:26.043 Data Units Read: 0 00:15:26.043 Data Units Written: 0 00:15:26.043 Host Read Commands: 0 00:15:26.043 Host Write Commands: 
0 00:15:26.043 Controller Busy Time: 0 minutes 00:15:26.043 Power Cycles: 0 00:15:26.043 Power On Hours: 0 hours 00:15:26.043 Unsafe Shutdowns: 0 00:15:26.043 Unrecoverable Media Errors: 0 00:15:26.043 Lifetime Error Log Entries: 0 00:15:26.043 Warning Temperature Time: 0 minutes 00:15:26.043 Critical Temperature Time: 0 minutes 00:15:26.043 00:15:26.043 Number of Queues 00:15:26.043 ================ 00:15:26.043 Number of I/O Submission Queues: 127 00:15:26.043 Number of I/O Completion Queues: 127 00:15:26.043 00:15:26.043 Active Namespaces 00:15:26.043 ================= 00:15:26.043 Namespace ID:1 00:15:26.043 Error Recovery Timeout: Unlimited 00:15:26.043 Command Set Identifier: NVM (00h) 00:15:26.043 Deallocate: Supported 00:15:26.043 Deallocated/Unwritten Error: Not Supported 00:15:26.043 Deallocated Read Value: Unknown 00:15:26.043 Deallocate in Write Zeroes: Not Supported 00:15:26.043 Deallocated Guard Field: 0xFFFF 00:15:26.043 Flush: Supported 00:15:26.043 Reservation: Supported 00:15:26.043 Namespace Sharing Capabilities: Multiple Controllers 00:15:26.043 Size (in LBAs): 131072 (0GiB) 00:15:26.043 Capacity (in LBAs): 131072 (0GiB) 00:15:26.043 Utilization (in LBAs): 131072 (0GiB) 00:15:26.043 NGUID: 6C74304015D34B3F8AD07DC985559B39 00:15:26.043 UUID: 6c743040-15d3-4b3f-8ad0-7dc985559b39 00:15:26.043 Thin Provisioning: Not Supported 00:15:26.043 Per-NS Atomic Units: Yes 00:15:26.043 Atomic Boundary Size (Normal): 0 00:15:26.043 Atomic Boundary Size (PFail): 0 00:15:26.043 Atomic Boundary Offset: 0 00:15:26.043 Maximum Single Source Range Length: 65535 00:15:26.043 Maximum Copy Length: 65535 00:15:26.043 Maximum Source Range Count: 1 00:15:26.043 NGUID/EUI64 Never Reused: No 00:15:26.043 Namespace Write Protected: No 00:15:26.043 Number of LBA Formats: 1 00:15:26.043 Current LBA Format: LBA Format #00 00:15:26.043 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:26.043 00:15:26.043 23:03:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:26.043 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.043 [2024-07-24 23:03:43.740415] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:31.334 Initializing NVMe Controllers 00:15:31.334 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:31.334 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:31.334 Initialization complete. Launching workers. 00:15:31.334 ======================================================== 00:15:31.334 Latency(us) 00:15:31.334 Device Information : IOPS MiB/s Average min max 00:15:31.334 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39966.33 156.12 3202.57 847.81 6804.77 00:15:31.334 ======================================================== 00:15:31.334 Total : 39966.33 156.12 3202.57 847.81 6804.77 00:15:31.334 00:15:31.334 [2024-07-24 23:03:48.761875] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:31.334 23:03:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:31.334 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.334 [2024-07-24 23:03:48.942725] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:36.622 Initializing NVMe Controllers 00:15:36.622 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:36.622 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:36.622 Initialization complete. Launching workers. 00:15:36.622 ======================================================== 00:15:36.622 Latency(us) 00:15:36.622 Device Information : IOPS MiB/s Average min max 00:15:36.622 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7980.53 4988.87 10975.06 00:15:36.622 ======================================================== 00:15:36.622 Total : 16051.20 62.70 7980.53 4988.87 10975.06 00:15:36.622 00:15:36.622 [2024-07-24 23:03:53.975744] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:36.622 23:03:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:36.622 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.622 [2024-07-24 23:03:54.179621] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:41.910 [2024-07-24 23:03:59.282084] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:41.910 Initializing NVMe Controllers 00:15:41.910 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:41.910 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:41.910 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:41.910 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:41.910 Associating VFIOUSER 
(/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:41.910 Initialization complete. Launching workers. 00:15:41.910 Starting thread on core 2 00:15:41.910 Starting thread on core 3 00:15:41.910 Starting thread on core 1 00:15:41.910 23:03:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:41.910 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.910 [2024-07-24 23:03:59.550167] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:45.209 [2024-07-24 23:04:02.617376] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:45.209 Initializing NVMe Controllers 00:15:45.209 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:45.209 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:45.209 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:45.209 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:45.209 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:45.209 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:45.209 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:45.210 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:45.210 Initialization complete. Launching workers. 
00:15:45.210 Starting thread on core 1 with urgent priority queue 00:15:45.210 Starting thread on core 2 with urgent priority queue 00:15:45.210 Starting thread on core 3 with urgent priority queue 00:15:45.210 Starting thread on core 0 with urgent priority queue 00:15:45.210 SPDK bdev Controller (SPDK1 ) core 0: 8214.67 IO/s 12.17 secs/100000 ios 00:15:45.210 SPDK bdev Controller (SPDK1 ) core 1: 8154.00 IO/s 12.26 secs/100000 ios 00:15:45.210 SPDK bdev Controller (SPDK1 ) core 2: 10414.33 IO/s 9.60 secs/100000 ios 00:15:45.210 SPDK bdev Controller (SPDK1 ) core 3: 8164.67 IO/s 12.25 secs/100000 ios 00:15:45.210 ======================================================== 00:15:45.210 00:15:45.210 23:04:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:45.210 EAL: No free 2048 kB hugepages reported on node 1 00:15:45.210 [2024-07-24 23:04:02.894206] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:45.210 Initializing NVMe Controllers 00:15:45.210 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:45.210 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:45.210 Namespace ID: 1 size: 0GB 00:15:45.210 Initialization complete. 00:15:45.210 INFO: using host memory buffer for IO 00:15:45.210 Hello world! 
00:15:45.210 [2024-07-24 23:04:02.928399] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:45.210 23:04:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:45.470 EAL: No free 2048 kB hugepages reported on node 1 00:15:45.470 [2024-07-24 23:04:03.201187] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:46.861 Initializing NVMe Controllers 00:15:46.861 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:46.861 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:46.861 Initialization complete. Launching workers. 00:15:46.861 submit (in ns) avg, min, max = 9124.7, 3917.5, 4001035.0 00:15:46.861 complete (in ns) avg, min, max = 16495.4, 2371.7, 3998532.5 00:15:46.861 00:15:46.861 Submit histogram 00:15:46.861 ================ 00:15:46.861 Range in us Cumulative Count 00:15:46.861 3.893 - 3.920: 0.0052% ( 1) 00:15:46.861 3.920 - 3.947: 0.2302% ( 43) 00:15:46.861 3.947 - 3.973: 1.8887% ( 317) 00:15:46.861 3.973 - 4.000: 8.1249% ( 1192) 00:15:46.861 4.000 - 4.027: 17.6415% ( 1819) 00:15:46.861 4.027 - 4.053: 30.4437% ( 2447) 00:15:46.861 4.053 - 4.080: 42.9842% ( 2397) 00:15:46.861 4.080 - 4.107: 56.8641% ( 2653) 00:15:46.861 4.107 - 4.133: 73.5063% ( 3181) 00:15:46.861 4.133 - 4.160: 87.7367% ( 2720) 00:15:46.861 4.160 - 4.187: 95.5007% ( 1484) 00:15:46.861 4.187 - 4.213: 98.5822% ( 589) 00:15:46.861 4.213 - 4.240: 99.3460% ( 146) 00:15:46.861 4.240 - 4.267: 99.4245% ( 15) 00:15:46.861 4.267 - 4.293: 99.4402% ( 3) 00:15:46.861 4.480 - 4.507: 99.4507% ( 2) 00:15:46.861 4.507 - 4.533: 99.4559% ( 1) 00:15:46.861 4.587 - 4.613: 99.4611% ( 1) 00:15:46.861 4.747 - 4.773: 99.4664% ( 1) 00:15:46.861 4.987 - 5.013: 
99.4716% ( 1) 00:15:46.861 5.147 - 5.173: 99.4768% ( 1) 00:15:46.861 5.307 - 5.333: 99.4821% ( 1) 00:15:46.861 5.493 - 5.520: 99.4873% ( 1) 00:15:46.861 5.600 - 5.627: 99.4925% ( 1) 00:15:46.861 5.813 - 5.840: 99.4978% ( 1) 00:15:46.861 5.920 - 5.947: 99.5082% ( 2) 00:15:46.861 6.000 - 6.027: 99.5134% ( 1) 00:15:46.861 6.053 - 6.080: 99.5187% ( 1) 00:15:46.861 6.080 - 6.107: 99.5291% ( 2) 00:15:46.861 6.133 - 6.160: 99.5344% ( 1) 00:15:46.861 6.187 - 6.213: 99.5396% ( 1) 00:15:46.861 6.213 - 6.240: 99.5501% ( 2) 00:15:46.861 6.267 - 6.293: 99.5605% ( 2) 00:15:46.861 6.320 - 6.347: 99.5710% ( 2) 00:15:46.861 6.373 - 6.400: 99.5762% ( 1) 00:15:46.861 6.427 - 6.453: 99.5815% ( 1) 00:15:46.861 6.453 - 6.480: 99.6024% ( 4) 00:15:46.861 6.480 - 6.507: 99.6076% ( 1) 00:15:46.861 6.507 - 6.533: 99.6128% ( 1) 00:15:46.861 6.533 - 6.560: 99.6338% ( 4) 00:15:46.861 6.560 - 6.587: 99.6390% ( 1) 00:15:46.861 6.587 - 6.613: 99.6442% ( 1) 00:15:46.861 6.640 - 6.667: 99.6652% ( 4) 00:15:46.861 6.667 - 6.693: 99.6756% ( 2) 00:15:46.861 6.693 - 6.720: 99.6809% ( 1) 00:15:46.861 6.720 - 6.747: 99.6966% ( 3) 00:15:46.861 6.747 - 6.773: 99.7018% ( 1) 00:15:46.861 6.773 - 6.800: 99.7123% ( 2) 00:15:46.861 6.800 - 6.827: 99.7227% ( 2) 00:15:46.861 6.827 - 6.880: 99.7436% ( 4) 00:15:46.861 6.880 - 6.933: 99.7541% ( 2) 00:15:46.861 6.933 - 6.987: 99.7750% ( 4) 00:15:46.861 6.987 - 7.040: 99.7855% ( 2) 00:15:46.861 7.040 - 7.093: 99.7907% ( 1) 00:15:46.861 7.093 - 7.147: 99.8012% ( 2) 00:15:46.861 7.147 - 7.200: 99.8117% ( 2) 00:15:46.861 7.253 - 7.307: 99.8169% ( 1) 00:15:46.861 7.413 - 7.467: 99.8221% ( 1) 00:15:46.861 7.627 - 7.680: 99.8274% ( 1) 00:15:46.861 7.680 - 7.733: 99.8326% ( 1) 00:15:46.861 7.733 - 7.787: 99.8378% ( 1) 00:15:46.861 8.160 - 8.213: 99.8430% ( 1) 00:15:46.861 8.267 - 8.320: 99.8483% ( 1) 00:15:46.861 8.640 - 8.693: 99.8535% ( 1) 00:15:46.861 9.653 - 9.707: 99.8587% ( 1) 00:15:46.861 12.800 - 12.853: 99.8640% ( 1) 00:15:46.861 13.067 - 13.120: 99.8692% ( 1) 
00:15:46.861 179.200 - 180.053: 99.8744% ( 1) 00:15:46.861 3986.773 - 4014.080: 100.0000% ( 24) 00:15:46.861 00:15:46.861 Complete histogram 00:15:46.861 ================== 00:15:46.861 Range in us Cumulative Count 00:15:46.861 2.360 - 2.373: 0.0052% ( 1) 00:15:46.861 2.373 - 2.387: 0.1203% ( 22) 00:15:46.861 2.387 - 2.400: 1.0516% ( 178) 00:15:46.861 2.400 - 2.413: 1.1667% ( 22) 00:15:46.861 2.413 - 2.427: 1.3550% ( 36) 00:15:46.861 2.427 - 2.440: 1.3864% ( 6) 00:15:46.861 2.440 - 2.453: 2.8461% ( 279) 00:15:46.861 2.453 - 2.467: 40.9490% ( 7283) 00:15:46.861 2.467 - [2024-07-24 23:04:04.227714] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:46.861 2.480: 53.9081% ( 2477) 00:15:46.861 2.480 - 2.493: 71.0003% ( 3267) 00:15:46.861 2.493 - 2.507: 78.9003% ( 1510) 00:15:46.861 2.507 - 2.520: 81.2441% ( 448) 00:15:46.861 2.520 - 2.533: 85.0947% ( 736) 00:15:46.861 2.533 - 2.547: 90.5200% ( 1037) 00:15:46.861 2.547 - 2.560: 94.3759% ( 737) 00:15:46.861 2.560 - 2.573: 97.1435% ( 529) 00:15:46.861 2.573 - 2.587: 98.7182% ( 301) 00:15:46.861 2.587 - 2.600: 99.2728% ( 106) 00:15:46.861 2.600 - 2.613: 99.4088% ( 26) 00:15:46.861 2.613 - 2.627: 99.4350% ( 5) 00:15:46.861 2.627 - 2.640: 99.4454% ( 2) 00:15:46.861 4.480 - 4.507: 99.4507% ( 1) 00:15:46.861 4.587 - 4.613: 99.4559% ( 1) 00:15:46.861 4.720 - 4.747: 99.4611% ( 1) 00:15:46.861 4.773 - 4.800: 99.4664% ( 1) 00:15:46.861 4.800 - 4.827: 99.4768% ( 2) 00:15:46.861 4.827 - 4.853: 99.4821% ( 1) 00:15:46.861 4.853 - 4.880: 99.4873% ( 1) 00:15:46.861 4.880 - 4.907: 99.4925% ( 1) 00:15:46.861 4.987 - 5.013: 99.4978% ( 1) 00:15:46.861 5.013 - 5.040: 99.5082% ( 2) 00:15:46.861 5.040 - 5.067: 99.5134% ( 1) 00:15:46.861 5.093 - 5.120: 99.5187% ( 1) 00:15:46.861 5.120 - 5.147: 99.5239% ( 1) 00:15:46.861 5.147 - 5.173: 99.5291% ( 1) 00:15:46.861 5.173 - 5.200: 99.5344% ( 1) 00:15:46.861 5.200 - 5.227: 99.5396% ( 1) 00:15:46.861 5.253 - 5.280: 99.5448% ( 1) 00:15:46.861 
5.280 - 5.307: 99.5553% ( 2) 00:15:46.861 5.307 - 5.333: 99.5658% ( 2) 00:15:46.861 5.333 - 5.360: 99.5710% ( 1) 00:15:46.861 5.360 - 5.387: 99.5762% ( 1) 00:15:46.861 5.493 - 5.520: 99.5815% ( 1) 00:15:46.861 5.627 - 5.653: 99.5867% ( 1) 00:15:46.861 5.707 - 5.733: 99.5972% ( 2) 00:15:46.861 5.813 - 5.840: 99.6024% ( 1) 00:15:46.861 6.240 - 6.267: 99.6076% ( 1) 00:15:46.861 6.480 - 6.507: 99.6128% ( 1) 00:15:46.861 6.533 - 6.560: 99.6181% ( 1) 00:15:46.861 6.933 - 6.987: 99.6233% ( 1) 00:15:46.861 6.987 - 7.040: 99.6285% ( 1) 00:15:46.861 11.147 - 11.200: 99.6338% ( 1) 00:15:46.861 11.787 - 11.840: 99.6390% ( 1) 00:15:46.862 14.187 - 14.293: 99.6442% ( 1) 00:15:46.862 43.947 - 44.160: 99.6495% ( 1) 00:15:46.862 3986.773 - 4014.080: 100.0000% ( 67) 00:15:46.862 00:15:46.862 23:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:46.862 23:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:46.862 23:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:46.862 23:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:46.862 23:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:46.862 [ 00:15:46.862 { 00:15:46.862 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:46.862 "subtype": "Discovery", 00:15:46.862 "listen_addresses": [], 00:15:46.862 "allow_any_host": true, 00:15:46.862 "hosts": [] 00:15:46.862 }, 00:15:46.862 { 00:15:46.862 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:46.862 "subtype": "NVMe", 00:15:46.862 "listen_addresses": [ 00:15:46.862 { 00:15:46.862 "trtype": "VFIOUSER", 00:15:46.862 
"adrfam": "IPv4", 00:15:46.862 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:46.862 "trsvcid": "0" 00:15:46.862 } 00:15:46.862 ], 00:15:46.862 "allow_any_host": true, 00:15:46.862 "hosts": [], 00:15:46.862 "serial_number": "SPDK1", 00:15:46.862 "model_number": "SPDK bdev Controller", 00:15:46.862 "max_namespaces": 32, 00:15:46.862 "min_cntlid": 1, 00:15:46.862 "max_cntlid": 65519, 00:15:46.862 "namespaces": [ 00:15:46.862 { 00:15:46.862 "nsid": 1, 00:15:46.862 "bdev_name": "Malloc1", 00:15:46.862 "name": "Malloc1", 00:15:46.862 "nguid": "6C74304015D34B3F8AD07DC985559B39", 00:15:46.862 "uuid": "6c743040-15d3-4b3f-8ad0-7dc985559b39" 00:15:46.862 } 00:15:46.862 ] 00:15:46.862 }, 00:15:46.862 { 00:15:46.862 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:46.862 "subtype": "NVMe", 00:15:46.862 "listen_addresses": [ 00:15:46.862 { 00:15:46.862 "trtype": "VFIOUSER", 00:15:46.862 "adrfam": "IPv4", 00:15:46.862 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:46.862 "trsvcid": "0" 00:15:46.862 } 00:15:46.862 ], 00:15:46.862 "allow_any_host": true, 00:15:46.862 "hosts": [], 00:15:46.862 "serial_number": "SPDK2", 00:15:46.862 "model_number": "SPDK bdev Controller", 00:15:46.862 "max_namespaces": 32, 00:15:46.862 "min_cntlid": 1, 00:15:46.862 "max_cntlid": 65519, 00:15:46.862 "namespaces": [ 00:15:46.862 { 00:15:46.862 "nsid": 1, 00:15:46.862 "bdev_name": "Malloc2", 00:15:46.862 "name": "Malloc2", 00:15:46.862 "nguid": "F1644C4D66584508A8C691F6B8E8B614", 00:15:46.862 "uuid": "f1644c4d-6658-4508-a8c6-91f6b8e8b614" 00:15:46.862 } 00:15:46.862 ] 00:15:46.862 } 00:15:46.862 ] 00:15:46.862 23:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:46.862 23:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 
subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:46.862 23:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=822361 00:15:46.862 23:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:46.862 23:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:46.862 23:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:46.862 23:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:46.862 23:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:46.862 23:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:46.862 23:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:46.862 EAL: No free 2048 kB hugepages reported on node 1 00:15:46.862 Malloc3 00:15:46.862 [2024-07-24 23:04:04.616208] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:46.862 23:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:47.124 [2024-07-24 23:04:04.788280] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:47.124 23:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:47.124 Asynchronous Event Request test 00:15:47.124 Attaching to 
/var/run/vfio-user/domain/vfio-user1/1 00:15:47.124 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:47.124 Registering asynchronous event callbacks... 00:15:47.124 Starting namespace attribute notice tests for all controllers... 00:15:47.124 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:47.124 aer_cb - Changed Namespace 00:15:47.124 Cleaning up... 00:15:47.386 [ 00:15:47.386 { 00:15:47.386 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:47.386 "subtype": "Discovery", 00:15:47.386 "listen_addresses": [], 00:15:47.386 "allow_any_host": true, 00:15:47.386 "hosts": [] 00:15:47.386 }, 00:15:47.386 { 00:15:47.386 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:47.386 "subtype": "NVMe", 00:15:47.386 "listen_addresses": [ 00:15:47.386 { 00:15:47.386 "trtype": "VFIOUSER", 00:15:47.386 "adrfam": "IPv4", 00:15:47.386 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:47.386 "trsvcid": "0" 00:15:47.386 } 00:15:47.386 ], 00:15:47.386 "allow_any_host": true, 00:15:47.386 "hosts": [], 00:15:47.386 "serial_number": "SPDK1", 00:15:47.386 "model_number": "SPDK bdev Controller", 00:15:47.386 "max_namespaces": 32, 00:15:47.386 "min_cntlid": 1, 00:15:47.386 "max_cntlid": 65519, 00:15:47.386 "namespaces": [ 00:15:47.386 { 00:15:47.386 "nsid": 1, 00:15:47.386 "bdev_name": "Malloc1", 00:15:47.386 "name": "Malloc1", 00:15:47.386 "nguid": "6C74304015D34B3F8AD07DC985559B39", 00:15:47.386 "uuid": "6c743040-15d3-4b3f-8ad0-7dc985559b39" 00:15:47.386 }, 00:15:47.386 { 00:15:47.386 "nsid": 2, 00:15:47.386 "bdev_name": "Malloc3", 00:15:47.386 "name": "Malloc3", 00:15:47.386 "nguid": "CBC4C7521B42468DA27179996E138279", 00:15:47.386 "uuid": "cbc4c752-1b42-468d-a271-79996e138279" 00:15:47.386 } 00:15:47.386 ] 00:15:47.386 }, 00:15:47.386 { 00:15:47.386 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:47.386 "subtype": "NVMe", 00:15:47.386 "listen_addresses": [ 00:15:47.386 { 00:15:47.386 "trtype": "VFIOUSER", 00:15:47.386 
"adrfam": "IPv4", 00:15:47.386 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:47.386 "trsvcid": "0" 00:15:47.386 } 00:15:47.386 ], 00:15:47.386 "allow_any_host": true, 00:15:47.386 "hosts": [], 00:15:47.386 "serial_number": "SPDK2", 00:15:47.386 "model_number": "SPDK bdev Controller", 00:15:47.386 "max_namespaces": 32, 00:15:47.386 "min_cntlid": 1, 00:15:47.386 "max_cntlid": 65519, 00:15:47.386 "namespaces": [ 00:15:47.386 { 00:15:47.386 "nsid": 1, 00:15:47.386 "bdev_name": "Malloc2", 00:15:47.386 "name": "Malloc2", 00:15:47.386 "nguid": "F1644C4D66584508A8C691F6B8E8B614", 00:15:47.386 "uuid": "f1644c4d-6658-4508-a8c6-91f6b8e8b614" 00:15:47.386 } 00:15:47.386 ] 00:15:47.386 } 00:15:47.386 ] 00:15:47.386 23:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 822361 00:15:47.386 23:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:47.386 23:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:47.386 23:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:47.386 23:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:47.386 [2024-07-24 23:04:05.007567] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:15:47.386 [2024-07-24 23:04:05.007635] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid822627 ] 00:15:47.386 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.386 [2024-07-24 23:04:05.041302] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:47.386 [2024-07-24 23:04:05.046526] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:47.386 [2024-07-24 23:04:05.046548] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3611961000 00:15:47.386 [2024-07-24 23:04:05.047522] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:47.386 [2024-07-24 23:04:05.048533] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:47.386 [2024-07-24 23:04:05.049539] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:47.386 [2024-07-24 23:04:05.050545] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:47.386 [2024-07-24 23:04:05.051552] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:47.386 [2024-07-24 23:04:05.052562] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:47.386 [2024-07-24 23:04:05.053567] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, 
Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:47.386 [2024-07-24 23:04:05.054575] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:47.386 [2024-07-24 23:04:05.055581] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:47.386 [2024-07-24 23:04:05.055593] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3611956000 00:15:47.386 [2024-07-24 23:04:05.056922] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:47.386 [2024-07-24 23:04:05.073136] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:47.386 [2024-07-24 23:04:05.073159] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:47.386 [2024-07-24 23:04:05.078240] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:47.386 [2024-07-24 23:04:05.078288] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:47.386 [2024-07-24 23:04:05.078369] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:47.386 [2024-07-24 23:04:05.078381] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:47.386 [2024-07-24 23:04:05.078387] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:47.387 [2024-07-24 23:04:05.079241] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:47.387 [2024-07-24 23:04:05.079254] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:47.387 [2024-07-24 23:04:05.079262] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:47.387 [2024-07-24 23:04:05.080245] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:47.387 [2024-07-24 23:04:05.080253] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:47.387 [2024-07-24 23:04:05.080261] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:47.387 [2024-07-24 23:04:05.081253] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:47.387 [2024-07-24 23:04:05.081262] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:47.387 [2024-07-24 23:04:05.082262] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:47.387 [2024-07-24 23:04:05.082270] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:47.387 [2024-07-24 23:04:05.082275] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:47.387 [2024-07-24 23:04:05.082282] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:47.387 [2024-07-24 23:04:05.082387] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:47.387 [2024-07-24 23:04:05.082391] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:47.387 [2024-07-24 23:04:05.082396] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:47.387 [2024-07-24 23:04:05.083268] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:47.387 [2024-07-24 23:04:05.084270] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:47.387 [2024-07-24 23:04:05.085283] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:47.387 [2024-07-24 23:04:05.086281] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:47.387 [2024-07-24 23:04:05.086319] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:47.387 [2024-07-24 23:04:05.087293] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:47.387 [2024-07-24 23:04:05.087301] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:47.387 [2024-07-24 23:04:05.087306] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:47.387 [2024-07-24 23:04:05.087327] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:47.387 [2024-07-24 23:04:05.087338] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:47.387 [2024-07-24 23:04:05.087350] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:47.387 [2024-07-24 23:04:05.087355] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:47.387 [2024-07-24 23:04:05.087359] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:47.387 [2024-07-24 23:04:05.087371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:47.387 [2024-07-24 23:04:05.093758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:47.387 [2024-07-24 23:04:05.093769] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:47.387 [2024-07-24 23:04:05.093774] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:47.387 [2024-07-24 23:04:05.093778] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:47.387 [2024-07-24 23:04:05.093783] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:47.387 [2024-07-24 23:04:05.093788] 
nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:47.387 [2024-07-24 23:04:05.093792] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:47.387 [2024-07-24 23:04:05.093797] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:47.387 [2024-07-24 23:04:05.093804] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:47.387 [2024-07-24 23:04:05.093816] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:47.387 [2024-07-24 23:04:05.101759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:47.387 [2024-07-24 23:04:05.101774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.387 [2024-07-24 23:04:05.101784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.387 [2024-07-24 23:04:05.101793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.387 [2024-07-24 23:04:05.101801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.387 [2024-07-24 23:04:05.101806] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:47.387 [2024-07-24 23:04:05.101814] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:47.387 [2024-07-24 23:04:05.101823] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:47.387 [2024-07-24 23:04:05.109756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:47.387 [2024-07-24 23:04:05.109763] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:47.387 [2024-07-24 23:04:05.109768] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:47.387 [2024-07-24 23:04:05.109777] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:47.387 [2024-07-24 23:04:05.109782] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:47.387 [2024-07-24 23:04:05.109791] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:47.387 [2024-07-24 23:04:05.117766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:47.387 [2024-07-24 23:04:05.117831] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:47.387 [2024-07-24 23:04:05.117839] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:47.387 [2024-07-24 23:04:05.117846] 
nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:47.387 [2024-07-24 23:04:05.117851] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:47.387 [2024-07-24 23:04:05.117854] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:47.387 [2024-07-24 23:04:05.117860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:47.387 [2024-07-24 23:04:05.125757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:47.387 [2024-07-24 23:04:05.125768] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:47.387 [2024-07-24 23:04:05.125776] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:47.387 [2024-07-24 23:04:05.125784] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:47.387 [2024-07-24 23:04:05.125791] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:47.387 [2024-07-24 23:04:05.125795] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:47.387 [2024-07-24 23:04:05.125799] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:47.387 [2024-07-24 23:04:05.125807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:47.387 [2024-07-24 23:04:05.133757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:000a p:1 m:0 dnr:0 00:15:47.387 [2024-07-24 23:04:05.133770] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:47.387 [2024-07-24 23:04:05.133778] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:47.387 [2024-07-24 23:04:05.133785] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:47.387 [2024-07-24 23:04:05.133789] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:47.387 [2024-07-24 23:04:05.133793] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:47.387 [2024-07-24 23:04:05.133799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:47.388 [2024-07-24 23:04:05.141756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:47.388 [2024-07-24 23:04:05.141765] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:47.388 [2024-07-24 23:04:05.141772] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:47.388 [2024-07-24 23:04:05.141781] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:47.388 [2024-07-24 23:04:05.141789] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:15:47.388 
[2024-07-24 23:04:05.141794] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:47.388 [2024-07-24 23:04:05.141799] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:47.388 [2024-07-24 23:04:05.141803] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:47.388 [2024-07-24 23:04:05.141808] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:47.388 [2024-07-24 23:04:05.141813] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:47.388 [2024-07-24 23:04:05.141829] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:47.388 [2024-07-24 23:04:05.149758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:47.388 [2024-07-24 23:04:05.149771] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:47.388 [2024-07-24 23:04:05.157756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:47.388 [2024-07-24 23:04:05.157770] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:47.388 [2024-07-24 23:04:05.165758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:47.388 [2024-07-24 23:04:05.165772] nvme_qpair.c: 213:nvme_admin_qpair_print_command: 
*NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:47.649 [2024-07-24 23:04:05.173757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:47.649 [2024-07-24 23:04:05.173774] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:47.649 [2024-07-24 23:04:05.173779] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:47.649 [2024-07-24 23:04:05.173782] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:47.649 [2024-07-24 23:04:05.173786] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:47.649 [2024-07-24 23:04:05.173789] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:47.649 [2024-07-24 23:04:05.173795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:47.649 [2024-07-24 23:04:05.173803] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:47.649 [2024-07-24 23:04:05.173807] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:47.649 [2024-07-24 23:04:05.173811] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:47.649 [2024-07-24 23:04:05.173817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:47.649 [2024-07-24 23:04:05.173824] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:47.650 [2024-07-24 23:04:05.173829] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002fb000 00:15:47.650 [2024-07-24 23:04:05.173832] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:47.650 [2024-07-24 23:04:05.173838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:47.650 [2024-07-24 23:04:05.173846] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:47.650 [2024-07-24 23:04:05.173850] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:47.650 [2024-07-24 23:04:05.173854] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:47.650 [2024-07-24 23:04:05.173860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:47.650 [2024-07-24 23:04:05.181758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:47.650 [2024-07-24 23:04:05.181772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:47.650 [2024-07-24 23:04:05.181782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:47.650 [2024-07-24 23:04:05.181789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:47.650 ===================================================== 00:15:47.650 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:47.650 ===================================================== 00:15:47.650 Controller Capabilities/Features 00:15:47.650 ================================ 00:15:47.650 Vendor ID: 4e58 00:15:47.650 
Subsystem Vendor ID: 4e58 00:15:47.650 Serial Number: SPDK2 00:15:47.650 Model Number: SPDK bdev Controller 00:15:47.650 Firmware Version: 24.09 00:15:47.650 Recommended Arb Burst: 6 00:15:47.650 IEEE OUI Identifier: 8d 6b 50 00:15:47.650 Multi-path I/O 00:15:47.650 May have multiple subsystem ports: Yes 00:15:47.650 May have multiple controllers: Yes 00:15:47.650 Associated with SR-IOV VF: No 00:15:47.650 Max Data Transfer Size: 131072 00:15:47.650 Max Number of Namespaces: 32 00:15:47.650 Max Number of I/O Queues: 127 00:15:47.650 NVMe Specification Version (VS): 1.3 00:15:47.650 NVMe Specification Version (Identify): 1.3 00:15:47.650 Maximum Queue Entries: 256 00:15:47.650 Contiguous Queues Required: Yes 00:15:47.650 Arbitration Mechanisms Supported 00:15:47.650 Weighted Round Robin: Not Supported 00:15:47.650 Vendor Specific: Not Supported 00:15:47.650 Reset Timeout: 15000 ms 00:15:47.650 Doorbell Stride: 4 bytes 00:15:47.650 NVM Subsystem Reset: Not Supported 00:15:47.650 Command Sets Supported 00:15:47.650 NVM Command Set: Supported 00:15:47.650 Boot Partition: Not Supported 00:15:47.650 Memory Page Size Minimum: 4096 bytes 00:15:47.650 Memory Page Size Maximum: 4096 bytes 00:15:47.650 Persistent Memory Region: Not Supported 00:15:47.650 Optional Asynchronous Events Supported 00:15:47.650 Namespace Attribute Notices: Supported 00:15:47.650 Firmware Activation Notices: Not Supported 00:15:47.650 ANA Change Notices: Not Supported 00:15:47.650 PLE Aggregate Log Change Notices: Not Supported 00:15:47.650 LBA Status Info Alert Notices: Not Supported 00:15:47.650 EGE Aggregate Log Change Notices: Not Supported 00:15:47.650 Normal NVM Subsystem Shutdown event: Not Supported 00:15:47.650 Zone Descriptor Change Notices: Not Supported 00:15:47.650 Discovery Log Change Notices: Not Supported 00:15:47.650 Controller Attributes 00:15:47.650 128-bit Host Identifier: Supported 00:15:47.650 Non-Operational Permissive Mode: Not Supported 00:15:47.650 NVM Sets: Not Supported 
00:15:47.650 Read Recovery Levels: Not Supported 00:15:47.650 Endurance Groups: Not Supported 00:15:47.650 Predictable Latency Mode: Not Supported 00:15:47.650 Traffic Based Keep ALive: Not Supported 00:15:47.650 Namespace Granularity: Not Supported 00:15:47.650 SQ Associations: Not Supported 00:15:47.650 UUID List: Not Supported 00:15:47.650 Multi-Domain Subsystem: Not Supported 00:15:47.650 Fixed Capacity Management: Not Supported 00:15:47.650 Variable Capacity Management: Not Supported 00:15:47.650 Delete Endurance Group: Not Supported 00:15:47.650 Delete NVM Set: Not Supported 00:15:47.650 Extended LBA Formats Supported: Not Supported 00:15:47.650 Flexible Data Placement Supported: Not Supported 00:15:47.650 00:15:47.650 Controller Memory Buffer Support 00:15:47.650 ================================ 00:15:47.650 Supported: No 00:15:47.650 00:15:47.650 Persistent Memory Region Support 00:15:47.650 ================================ 00:15:47.650 Supported: No 00:15:47.650 00:15:47.650 Admin Command Set Attributes 00:15:47.650 ============================ 00:15:47.650 Security Send/Receive: Not Supported 00:15:47.650 Format NVM: Not Supported 00:15:47.650 Firmware Activate/Download: Not Supported 00:15:47.650 Namespace Management: Not Supported 00:15:47.650 Device Self-Test: Not Supported 00:15:47.650 Directives: Not Supported 00:15:47.650 NVMe-MI: Not Supported 00:15:47.650 Virtualization Management: Not Supported 00:15:47.650 Doorbell Buffer Config: Not Supported 00:15:47.650 Get LBA Status Capability: Not Supported 00:15:47.650 Command & Feature Lockdown Capability: Not Supported 00:15:47.650 Abort Command Limit: 4 00:15:47.650 Async Event Request Limit: 4 00:15:47.650 Number of Firmware Slots: N/A 00:15:47.650 Firmware Slot 1 Read-Only: N/A 00:15:47.650 Firmware Activation Without Reset: N/A 00:15:47.650 Multiple Update Detection Support: N/A 00:15:47.650 Firmware Update Granularity: No Information Provided 00:15:47.650 Per-Namespace SMART Log: No 00:15:47.650 
Asymmetric Namespace Access Log Page: Not Supported 00:15:47.650 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:47.650 Command Effects Log Page: Supported 00:15:47.650 Get Log Page Extended Data: Supported 00:15:47.650 Telemetry Log Pages: Not Supported 00:15:47.650 Persistent Event Log Pages: Not Supported 00:15:47.650 Supported Log Pages Log Page: May Support 00:15:47.650 Commands Supported & Effects Log Page: Not Supported 00:15:47.650 Feature Identifiers & Effects Log Page:May Support 00:15:47.650 NVMe-MI Commands & Effects Log Page: May Support 00:15:47.650 Data Area 4 for Telemetry Log: Not Supported 00:15:47.650 Error Log Page Entries Supported: 128 00:15:47.650 Keep Alive: Supported 00:15:47.650 Keep Alive Granularity: 10000 ms 00:15:47.650 00:15:47.650 NVM Command Set Attributes 00:15:47.650 ========================== 00:15:47.650 Submission Queue Entry Size 00:15:47.650 Max: 64 00:15:47.650 Min: 64 00:15:47.650 Completion Queue Entry Size 00:15:47.650 Max: 16 00:15:47.650 Min: 16 00:15:47.650 Number of Namespaces: 32 00:15:47.650 Compare Command: Supported 00:15:47.650 Write Uncorrectable Command: Not Supported 00:15:47.650 Dataset Management Command: Supported 00:15:47.650 Write Zeroes Command: Supported 00:15:47.650 Set Features Save Field: Not Supported 00:15:47.650 Reservations: Not Supported 00:15:47.650 Timestamp: Not Supported 00:15:47.650 Copy: Supported 00:15:47.650 Volatile Write Cache: Present 00:15:47.650 Atomic Write Unit (Normal): 1 00:15:47.650 Atomic Write Unit (PFail): 1 00:15:47.650 Atomic Compare & Write Unit: 1 00:15:47.650 Fused Compare & Write: Supported 00:15:47.650 Scatter-Gather List 00:15:47.650 SGL Command Set: Supported (Dword aligned) 00:15:47.650 SGL Keyed: Not Supported 00:15:47.650 SGL Bit Bucket Descriptor: Not Supported 00:15:47.650 SGL Metadata Pointer: Not Supported 00:15:47.650 Oversized SGL: Not Supported 00:15:47.650 SGL Metadata Address: Not Supported 00:15:47.650 SGL Offset: Not Supported 00:15:47.650 Transport 
SGL Data Block: Not Supported 00:15:47.650 Replay Protected Memory Block: Not Supported 00:15:47.650 00:15:47.650 Firmware Slot Information 00:15:47.650 ========================= 00:15:47.650 Active slot: 1 00:15:47.650 Slot 1 Firmware Revision: 24.09 00:15:47.650 00:15:47.650 00:15:47.650 Commands Supported and Effects 00:15:47.650 ============================== 00:15:47.650 Admin Commands 00:15:47.650 -------------- 00:15:47.650 Get Log Page (02h): Supported 00:15:47.650 Identify (06h): Supported 00:15:47.650 Abort (08h): Supported 00:15:47.650 Set Features (09h): Supported 00:15:47.650 Get Features (0Ah): Supported 00:15:47.650 Asynchronous Event Request (0Ch): Supported 00:15:47.650 Keep Alive (18h): Supported 00:15:47.650 I/O Commands 00:15:47.650 ------------ 00:15:47.650 Flush (00h): Supported LBA-Change 00:15:47.650 Write (01h): Supported LBA-Change 00:15:47.650 Read (02h): Supported 00:15:47.650 Compare (05h): Supported 00:15:47.650 Write Zeroes (08h): Supported LBA-Change 00:15:47.650 Dataset Management (09h): Supported LBA-Change 00:15:47.650 Copy (19h): Supported LBA-Change 00:15:47.650 00:15:47.651 Error Log 00:15:47.651 ========= 00:15:47.651 00:15:47.651 Arbitration 00:15:47.651 =========== 00:15:47.651 Arbitration Burst: 1 00:15:47.651 00:15:47.651 Power Management 00:15:47.651 ================ 00:15:47.651 Number of Power States: 1 00:15:47.651 Current Power State: Power State #0 00:15:47.651 Power State #0: 00:15:47.651 Max Power: 0.00 W 00:15:47.651 Non-Operational State: Operational 00:15:47.651 Entry Latency: Not Reported 00:15:47.651 Exit Latency: Not Reported 00:15:47.651 Relative Read Throughput: 0 00:15:47.651 Relative Read Latency: 0 00:15:47.651 Relative Write Throughput: 0 00:15:47.651 Relative Write Latency: 0 00:15:47.651 Idle Power: Not Reported 00:15:47.651 Active Power: Not Reported 00:15:47.651 Non-Operational Permissive Mode: Not Supported 00:15:47.651 00:15:47.651 Health Information 00:15:47.651 ================== 00:15:47.651 
Critical Warnings: 00:15:47.651 Available Spare Space: OK 00:15:47.651 Temperature: OK 00:15:47.651 Device Reliability: OK 00:15:47.651 Read Only: No 00:15:47.651 Volatile Memory Backup: OK 00:15:47.651 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:47.651 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:47.651 Available Spare: 0% 00:15:47.651 Available Sp[2024-07-24 23:04:05.181891] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:47.651 [2024-07-24 23:04:05.189757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:47.651 [2024-07-24 23:04:05.189786] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:47.651 [2024-07-24 23:04:05.189796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.651 [2024-07-24 23:04:05.189802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.651 [2024-07-24 23:04:05.189809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.651 [2024-07-24 23:04:05.189817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.651 [2024-07-24 23:04:05.189867] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:47.651 [2024-07-24 23:04:05.189877] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:47.651 [2024-07-24 23:04:05.190868] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:47.651 [2024-07-24 23:04:05.190916] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:47.651 [2024-07-24 23:04:05.190922] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:47.651 [2024-07-24 23:04:05.191880] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:47.651 [2024-07-24 23:04:05.191891] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:47.651 [2024-07-24 23:04:05.191942] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:47.651 [2024-07-24 23:04:05.193319] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:47.651 are Threshold: 0% 00:15:47.651 Life Percentage Used: 0% 00:15:47.651 Data Units Read: 0 00:15:47.651 Data Units Written: 0 00:15:47.651 Host Read Commands: 0 00:15:47.651 Host Write Commands: 0 00:15:47.651 Controller Busy Time: 0 minutes 00:15:47.651 Power Cycles: 0 00:15:47.651 Power On Hours: 0 hours 00:15:47.651 Unsafe Shutdowns: 0 00:15:47.651 Unrecoverable Media Errors: 0 00:15:47.651 Lifetime Error Log Entries: 0 00:15:47.651 Warning Temperature Time: 0 minutes 00:15:47.651 Critical Temperature Time: 0 minutes 00:15:47.651 00:15:47.651 Number of Queues 00:15:47.651 ================ 00:15:47.651 Number of I/O Submission Queues: 127 00:15:47.651 Number of I/O Completion Queues: 127 00:15:47.651 00:15:47.651 Active Namespaces 00:15:47.651 ================= 00:15:47.651 Namespace ID:1 00:15:47.651 Error Recovery Timeout: Unlimited 00:15:47.651 Command Set Identifier: NVM (00h) 00:15:47.651 Deallocate: 
Supported 00:15:47.651 Deallocated/Unwritten Error: Not Supported 00:15:47.651 Deallocated Read Value: Unknown 00:15:47.651 Deallocate in Write Zeroes: Not Supported 00:15:47.651 Deallocated Guard Field: 0xFFFF 00:15:47.651 Flush: Supported 00:15:47.651 Reservation: Supported 00:15:47.651 Namespace Sharing Capabilities: Multiple Controllers 00:15:47.651 Size (in LBAs): 131072 (0GiB) 00:15:47.651 Capacity (in LBAs): 131072 (0GiB) 00:15:47.651 Utilization (in LBAs): 131072 (0GiB) 00:15:47.651 NGUID: F1644C4D66584508A8C691F6B8E8B614 00:15:47.651 UUID: f1644c4d-6658-4508-a8c6-91f6b8e8b614 00:15:47.651 Thin Provisioning: Not Supported 00:15:47.651 Per-NS Atomic Units: Yes 00:15:47.651 Atomic Boundary Size (Normal): 0 00:15:47.651 Atomic Boundary Size (PFail): 0 00:15:47.651 Atomic Boundary Offset: 0 00:15:47.651 Maximum Single Source Range Length: 65535 00:15:47.651 Maximum Copy Length: 65535 00:15:47.651 Maximum Source Range Count: 1 00:15:47.651 NGUID/EUI64 Never Reused: No 00:15:47.651 Namespace Write Protected: No 00:15:47.651 Number of LBA Formats: 1 00:15:47.651 Current LBA Format: LBA Format #00 00:15:47.651 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:47.651 00:15:47.651 23:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:47.651 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.651 [2024-07-24 23:04:05.377777] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:52.938 Initializing NVMe Controllers 00:15:52.938 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:52.938 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:52.938 
Initialization complete. Launching workers. 00:15:52.938 ======================================================== 00:15:52.938 Latency(us) 00:15:52.938 Device Information : IOPS MiB/s Average min max 00:15:52.938 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40055.80 156.47 3197.93 841.83 7705.35 00:15:52.938 ======================================================== 00:15:52.938 Total : 40055.80 156.47 3197.93 841.83 7705.35 00:15:52.938 00:15:52.938 [2024-07-24 23:04:10.486940] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:52.938 23:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:52.938 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.938 [2024-07-24 23:04:10.665491] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:58.287 Initializing NVMe Controllers 00:15:58.287 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:58.287 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:58.287 Initialization complete. Launching workers. 
00:15:58.287 ======================================================== 00:15:58.287 Latency(us) 00:15:58.287 Device Information : IOPS MiB/s Average min max 00:15:58.287 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35487.86 138.62 3606.36 1102.90 6683.33 00:15:58.287 ======================================================== 00:15:58.287 Total : 35487.86 138.62 3606.36 1102.90 6683.33 00:15:58.287 00:15:58.287 [2024-07-24 23:04:15.688620] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:58.287 23:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:58.287 EAL: No free 2048 kB hugepages reported on node 1 00:15:58.287 [2024-07-24 23:04:15.871744] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:03.584 [2024-07-24 23:04:21.007828] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:03.584 Initializing NVMe Controllers 00:16:03.584 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:03.584 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:03.584 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:03.584 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:03.584 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:03.584 Initialization complete. Launching workers. 
00:16:03.584 Starting thread on core 2 00:16:03.584 Starting thread on core 3 00:16:03.584 Starting thread on core 1 00:16:03.584 23:04:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:03.584 EAL: No free 2048 kB hugepages reported on node 1 00:16:03.584 [2024-07-24 23:04:21.280233] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:06.879 [2024-07-24 23:04:24.328071] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:06.879 Initializing NVMe Controllers 00:16:06.879 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:06.879 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:06.879 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:06.879 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:06.879 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:06.879 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:06.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:06.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:06.879 Initialization complete. Launching workers. 
00:16:06.879 Starting thread on core 1 with urgent priority queue 00:16:06.879 Starting thread on core 2 with urgent priority queue 00:16:06.879 Starting thread on core 3 with urgent priority queue 00:16:06.879 Starting thread on core 0 with urgent priority queue 00:16:06.879 SPDK bdev Controller (SPDK2 ) core 0: 14412.67 IO/s 6.94 secs/100000 ios 00:16:06.879 SPDK bdev Controller (SPDK2 ) core 1: 8004.67 IO/s 12.49 secs/100000 ios 00:16:06.879 SPDK bdev Controller (SPDK2 ) core 2: 17709.67 IO/s 5.65 secs/100000 ios 00:16:06.879 SPDK bdev Controller (SPDK2 ) core 3: 10378.33 IO/s 9.64 secs/100000 ios 00:16:06.879 ======================================================== 00:16:06.879 00:16:06.879 23:04:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:06.879 EAL: No free 2048 kB hugepages reported on node 1 00:16:06.879 [2024-07-24 23:04:24.599231] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:06.879 Initializing NVMe Controllers 00:16:06.879 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:06.879 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:06.879 Namespace ID: 1 size: 0GB 00:16:06.879 Initialization complete. 00:16:06.879 INFO: using host memory buffer for IO 00:16:06.879 Hello world! 
00:16:06.879 [2024-07-24 23:04:24.607279] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:06.879 23:04:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:07.140 EAL: No free 2048 kB hugepages reported on node 1 00:16:07.140 [2024-07-24 23:04:24.879198] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:08.527 Initializing NVMe Controllers 00:16:08.527 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:08.527 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:08.527 Initialization complete. Launching workers. 00:16:08.527 submit (in ns) avg, min, max = 9652.5, 3898.3, 4005396.7 00:16:08.527 complete (in ns) avg, min, max = 16373.1, 2375.8, 4130445.8 00:16:08.527 00:16:08.527 Submit histogram 00:16:08.527 ================ 00:16:08.527 Range in us Cumulative Count 00:16:08.527 3.893 - 3.920: 1.0215% ( 197) 00:16:08.527 3.920 - 3.947: 5.5533% ( 874) 00:16:08.527 3.947 - 3.973: 13.9791% ( 1625) 00:16:08.527 3.973 - 4.000: 25.2463% ( 2173) 00:16:08.527 4.000 - 4.027: 36.3424% ( 2140) 00:16:08.527 4.027 - 4.053: 48.1956% ( 2286) 00:16:08.527 4.053 - 4.080: 64.8813% ( 3218) 00:16:08.527 4.080 - 4.107: 80.9551% ( 3100) 00:16:08.527 4.107 - 4.133: 91.7712% ( 2086) 00:16:08.527 4.133 - 4.160: 96.9097% ( 991) 00:16:08.527 4.160 - 4.187: 98.6156% ( 329) 00:16:08.527 4.187 - 4.213: 99.1600% ( 105) 00:16:08.527 4.213 - 4.240: 99.3726% ( 41) 00:16:08.527 4.240 - 4.267: 99.3933% ( 4) 00:16:08.527 4.267 - 4.293: 99.4348% ( 8) 00:16:08.527 4.293 - 4.320: 99.4400% ( 1) 00:16:08.527 4.320 - 4.347: 99.4504% ( 2) 00:16:08.527 4.480 - 4.507: 99.4556% ( 1) 00:16:08.527 4.693 - 4.720: 99.4607% ( 1) 00:16:08.527 4.827 - 4.853: 
99.4659% ( 1) 00:16:08.527 4.933 - 4.960: 99.4711% ( 1) 00:16:08.527 5.013 - 5.040: 99.4763% ( 1) 00:16:08.527 5.253 - 5.280: 99.4815% ( 1) 00:16:08.527 5.280 - 5.307: 99.4867% ( 1) 00:16:08.527 5.387 - 5.413: 99.4919% ( 1) 00:16:08.527 5.627 - 5.653: 99.4970% ( 1) 00:16:08.527 5.733 - 5.760: 99.5022% ( 1) 00:16:08.527 5.867 - 5.893: 99.5074% ( 1) 00:16:08.527 5.947 - 5.973: 99.5126% ( 1) 00:16:08.527 6.000 - 6.027: 99.5178% ( 1) 00:16:08.527 6.027 - 6.053: 99.5230% ( 1) 00:16:08.527 6.107 - 6.133: 99.5282% ( 1) 00:16:08.527 6.133 - 6.160: 99.5437% ( 3) 00:16:08.527 6.160 - 6.187: 99.5489% ( 1) 00:16:08.527 6.187 - 6.213: 99.5593% ( 2) 00:16:08.527 6.240 - 6.267: 99.5645% ( 1) 00:16:08.527 6.347 - 6.373: 99.5800% ( 3) 00:16:08.527 6.373 - 6.400: 99.5852% ( 1) 00:16:08.527 6.427 - 6.453: 99.5956% ( 2) 00:16:08.527 6.507 - 6.533: 99.6007% ( 1) 00:16:08.527 6.560 - 6.587: 99.6163% ( 3) 00:16:08.527 6.587 - 6.613: 99.6215% ( 1) 00:16:08.527 6.613 - 6.640: 99.6319% ( 2) 00:16:08.527 6.640 - 6.667: 99.6370% ( 1) 00:16:08.527 6.667 - 6.693: 99.6474% ( 2) 00:16:08.527 6.693 - 6.720: 99.6526% ( 1) 00:16:08.527 6.720 - 6.747: 99.6733% ( 4) 00:16:08.527 6.747 - 6.773: 99.6837% ( 2) 00:16:08.527 6.773 - 6.800: 99.6889% ( 1) 00:16:08.527 6.827 - 6.880: 99.7044% ( 3) 00:16:08.527 6.880 - 6.933: 99.7148% ( 2) 00:16:08.527 6.933 - 6.987: 99.7252% ( 2) 00:16:08.527 6.987 - 7.040: 99.7304% ( 1) 00:16:08.527 7.040 - 7.093: 99.7407% ( 2) 00:16:08.527 7.147 - 7.200: 99.7511% ( 2) 00:16:08.527 7.200 - 7.253: 99.7667% ( 3) 00:16:08.527 7.307 - 7.360: 99.7822% ( 3) 00:16:08.527 7.360 - 7.413: 99.7978% ( 3) 00:16:08.527 7.413 - 7.467: 99.8133% ( 3) 00:16:08.527 7.467 - 7.520: 99.8237% ( 2) 00:16:08.527 7.520 - 7.573: 99.8289% ( 1) 00:16:08.527 7.733 - 7.787: 99.8341% ( 1) 00:16:08.527 7.893 - 7.947: 99.8393% ( 1) 00:16:08.527 7.947 - 8.000: 99.8444% ( 1) 00:16:08.527 8.000 - 8.053: 99.8496% ( 1) 00:16:08.527 8.107 - 8.160: 99.8548% ( 1) 00:16:08.527 8.960 - 9.013: 99.8600% ( 1) 
00:16:08.527 3986.773 - 4014.080: 100.0000% ( 27) 00:16:08.527 00:16:08.527 Complete histogram 00:16:08.527 ================== 00:16:08.527 Range in us Cumulative Count 00:16:08.527 2.373 - 2.387: 0.0052% ( 1) 00:16:08.527 2.387 - 2.400: 0.0415% ( 7) 00:16:08.527 2.400 - 2.413: 0.9748% ( 180) 00:16:08.527 2.413 - 2.427: 1.0526% ( 15) 00:16:08.527 2.427 - 2.440: 1.3585% ( 59) 00:16:08.527 2.440 - 2.453: 39.6816% ( 7391) 00:16:08.527 2.453 - 2.467: 51.9185% ( 2360) 00:16:08.527 2.467 - 2.480: 70.4241% ( 3569) 00:16:08.527 2.480 - 2.493: 78.4662% ( 1551) 00:16:08.527 2.493 - [2024-07-24 23:04:25.974422] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:08.527 2.507: 81.3699% ( 560) 00:16:08.527 2.507 - 2.520: 84.5276% ( 609) 00:16:08.527 2.520 - 2.533: 89.8527% ( 1027) 00:16:08.527 2.533 - 2.547: 94.6593% ( 927) 00:16:08.527 2.547 - 2.560: 97.3608% ( 521) 00:16:08.527 2.560 - 2.573: 98.7867% ( 275) 00:16:08.527 2.573 - 2.587: 99.2948% ( 98) 00:16:08.527 2.587 - 2.600: 99.4245% ( 25) 00:16:08.527 2.600 - 2.613: 99.4400% ( 3) 00:16:08.527 2.613 - 2.627: 99.4504% ( 2) 00:16:08.527 2.680 - 2.693: 99.4556% ( 1) 00:16:08.527 4.320 - 4.347: 99.4607% ( 1) 00:16:08.527 4.640 - 4.667: 99.4659% ( 1) 00:16:08.527 4.747 - 4.773: 99.4711% ( 1) 00:16:08.527 4.853 - 4.880: 99.4815% ( 2) 00:16:08.527 4.907 - 4.933: 99.4919% ( 2) 00:16:08.527 4.933 - 4.960: 99.4970% ( 1) 00:16:08.527 5.013 - 5.040: 99.5022% ( 1) 00:16:08.527 5.040 - 5.067: 99.5074% ( 1) 00:16:08.528 5.067 - 5.093: 99.5126% ( 1) 00:16:08.528 5.120 - 5.147: 99.5178% ( 1) 00:16:08.528 5.173 - 5.200: 99.5230% ( 1) 00:16:08.528 5.200 - 5.227: 99.5333% ( 2) 00:16:08.528 5.227 - 5.253: 99.5385% ( 1) 00:16:08.528 5.253 - 5.280: 99.5437% ( 1) 00:16:08.528 5.307 - 5.333: 99.5489% ( 1) 00:16:08.528 5.333 - 5.360: 99.5541% ( 1) 00:16:08.528 5.387 - 5.413: 99.5696% ( 3) 00:16:08.528 5.413 - 5.440: 99.5852% ( 3) 00:16:08.528 5.547 - 5.573: 99.5904% ( 1) 00:16:08.528 5.573 - 
5.600: 99.5956% ( 1) 00:16:08.528 5.653 - 5.680: 99.6111% ( 3) 00:16:08.528 5.840 - 5.867: 99.6163% ( 1) 00:16:08.528 5.893 - 5.920: 99.6215% ( 1) 00:16:08.528 6.213 - 6.240: 99.6267% ( 1) 00:16:08.528 6.267 - 6.293: 99.6319% ( 1) 00:16:08.528 6.827 - 6.880: 99.6370% ( 1) 00:16:08.528 32.213 - 32.427: 99.6422% ( 1) 00:16:08.528 47.360 - 47.573: 99.6474% ( 1) 00:16:08.528 49.493 - 49.707: 99.6526% ( 1) 00:16:08.528 3986.773 - 4014.080: 99.9948% ( 66) 00:16:08.528 4123.307 - 4150.613: 100.0000% ( 1) 00:16:08.528 00:16:08.528 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:08.528 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:08.528 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:08.528 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:08.528 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:08.528 [ 00:16:08.528 { 00:16:08.528 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:08.528 "subtype": "Discovery", 00:16:08.528 "listen_addresses": [], 00:16:08.528 "allow_any_host": true, 00:16:08.528 "hosts": [] 00:16:08.528 }, 00:16:08.528 { 00:16:08.528 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:08.528 "subtype": "NVMe", 00:16:08.528 "listen_addresses": [ 00:16:08.528 { 00:16:08.528 "trtype": "VFIOUSER", 00:16:08.528 "adrfam": "IPv4", 00:16:08.528 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:08.528 "trsvcid": "0" 00:16:08.528 } 00:16:08.528 ], 00:16:08.528 "allow_any_host": true, 00:16:08.528 "hosts": [], 00:16:08.528 "serial_number": "SPDK1", 00:16:08.528 
"model_number": "SPDK bdev Controller", 00:16:08.528 "max_namespaces": 32, 00:16:08.528 "min_cntlid": 1, 00:16:08.528 "max_cntlid": 65519, 00:16:08.528 "namespaces": [ 00:16:08.528 { 00:16:08.528 "nsid": 1, 00:16:08.528 "bdev_name": "Malloc1", 00:16:08.528 "name": "Malloc1", 00:16:08.528 "nguid": "6C74304015D34B3F8AD07DC985559B39", 00:16:08.528 "uuid": "6c743040-15d3-4b3f-8ad0-7dc985559b39" 00:16:08.528 }, 00:16:08.528 { 00:16:08.528 "nsid": 2, 00:16:08.528 "bdev_name": "Malloc3", 00:16:08.528 "name": "Malloc3", 00:16:08.528 "nguid": "CBC4C7521B42468DA27179996E138279", 00:16:08.528 "uuid": "cbc4c752-1b42-468d-a271-79996e138279" 00:16:08.528 } 00:16:08.528 ] 00:16:08.528 }, 00:16:08.528 { 00:16:08.528 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:08.528 "subtype": "NVMe", 00:16:08.528 "listen_addresses": [ 00:16:08.528 { 00:16:08.528 "trtype": "VFIOUSER", 00:16:08.528 "adrfam": "IPv4", 00:16:08.528 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:08.528 "trsvcid": "0" 00:16:08.528 } 00:16:08.528 ], 00:16:08.528 "allow_any_host": true, 00:16:08.528 "hosts": [], 00:16:08.528 "serial_number": "SPDK2", 00:16:08.528 "model_number": "SPDK bdev Controller", 00:16:08.528 "max_namespaces": 32, 00:16:08.528 "min_cntlid": 1, 00:16:08.528 "max_cntlid": 65519, 00:16:08.528 "namespaces": [ 00:16:08.528 { 00:16:08.528 "nsid": 1, 00:16:08.528 "bdev_name": "Malloc2", 00:16:08.528 "name": "Malloc2", 00:16:08.528 "nguid": "F1644C4D66584508A8C691F6B8E8B614", 00:16:08.528 "uuid": "f1644c4d-6658-4508-a8c6-91f6b8e8b614" 00:16:08.528 } 00:16:08.528 ] 00:16:08.528 } 00:16:08.528 ] 00:16:08.528 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:08.528 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 
-g -t /tmp/aer_touch_file 00:16:08.528 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=826703 00:16:08.528 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:08.528 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:16:08.528 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:08.528 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:08.528 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:16:08.528 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:08.528 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:08.528 EAL: No free 2048 kB hugepages reported on node 1 00:16:08.789 Malloc4 00:16:08.789 [2024-07-24 23:04:26.359113] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:08.789 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:08.789 [2024-07-24 23:04:26.529205] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:08.789 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:08.789 Asynchronous Event Request test 00:16:08.789 Attaching to /var/run/vfio-user/domain/vfio-user2/2 
00:16:08.789 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:08.789 Registering asynchronous event callbacks... 00:16:08.789 Starting namespace attribute notice tests for all controllers... 00:16:08.789 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:08.789 aer_cb - Changed Namespace 00:16:08.789 Cleaning up... 00:16:09.051 [ 00:16:09.051 { 00:16:09.051 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:09.051 "subtype": "Discovery", 00:16:09.051 "listen_addresses": [], 00:16:09.051 "allow_any_host": true, 00:16:09.051 "hosts": [] 00:16:09.051 }, 00:16:09.051 { 00:16:09.051 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:09.051 "subtype": "NVMe", 00:16:09.051 "listen_addresses": [ 00:16:09.051 { 00:16:09.051 "trtype": "VFIOUSER", 00:16:09.051 "adrfam": "IPv4", 00:16:09.051 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:09.051 "trsvcid": "0" 00:16:09.051 } 00:16:09.051 ], 00:16:09.051 "allow_any_host": true, 00:16:09.051 "hosts": [], 00:16:09.051 "serial_number": "SPDK1", 00:16:09.051 "model_number": "SPDK bdev Controller", 00:16:09.051 "max_namespaces": 32, 00:16:09.051 "min_cntlid": 1, 00:16:09.051 "max_cntlid": 65519, 00:16:09.051 "namespaces": [ 00:16:09.051 { 00:16:09.051 "nsid": 1, 00:16:09.051 "bdev_name": "Malloc1", 00:16:09.051 "name": "Malloc1", 00:16:09.051 "nguid": "6C74304015D34B3F8AD07DC985559B39", 00:16:09.051 "uuid": "6c743040-15d3-4b3f-8ad0-7dc985559b39" 00:16:09.051 }, 00:16:09.051 { 00:16:09.051 "nsid": 2, 00:16:09.051 "bdev_name": "Malloc3", 00:16:09.051 "name": "Malloc3", 00:16:09.051 "nguid": "CBC4C7521B42468DA27179996E138279", 00:16:09.051 "uuid": "cbc4c752-1b42-468d-a271-79996e138279" 00:16:09.051 } 00:16:09.051 ] 00:16:09.051 }, 00:16:09.051 { 00:16:09.051 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:09.051 "subtype": "NVMe", 00:16:09.051 "listen_addresses": [ 00:16:09.051 { 00:16:09.051 "trtype": "VFIOUSER", 00:16:09.051 "adrfam": "IPv4", 00:16:09.051 "traddr": 
"/var/run/vfio-user/domain/vfio-user2/2", 00:16:09.051 "trsvcid": "0" 00:16:09.051 } 00:16:09.051 ], 00:16:09.051 "allow_any_host": true, 00:16:09.051 "hosts": [], 00:16:09.051 "serial_number": "SPDK2", 00:16:09.051 "model_number": "SPDK bdev Controller", 00:16:09.051 "max_namespaces": 32, 00:16:09.051 "min_cntlid": 1, 00:16:09.051 "max_cntlid": 65519, 00:16:09.051 "namespaces": [ 00:16:09.051 { 00:16:09.051 "nsid": 1, 00:16:09.051 "bdev_name": "Malloc2", 00:16:09.051 "name": "Malloc2", 00:16:09.051 "nguid": "F1644C4D66584508A8C691F6B8E8B614", 00:16:09.051 "uuid": "f1644c4d-6658-4508-a8c6-91f6b8e8b614" 00:16:09.051 }, 00:16:09.051 { 00:16:09.051 "nsid": 2, 00:16:09.051 "bdev_name": "Malloc4", 00:16:09.051 "name": "Malloc4", 00:16:09.051 "nguid": "1E72D404396A40A5875B52DFBA121BB9", 00:16:09.051 "uuid": "1e72d404-396a-40a5-875b-52dfba121bb9" 00:16:09.051 } 00:16:09.051 ] 00:16:09.051 } 00:16:09.051 ] 00:16:09.051 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 826703 00:16:09.051 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:09.051 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 817609 00:16:09.051 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 817609 ']' 00:16:09.051 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 817609 00:16:09.051 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:16:09.051 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:09.051 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 817609 00:16:09.051 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 
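Stripped of the per-record prefixes, the `nvmf_get_subsystems` reply above is ordinary JSON. A minimal sketch of pulling the bdev names out of a fragment shaped like that reply, using only grep and cut (the fragment is abbreviated from the trace; a real consumer would use a JSON parser):

```shell
# Extract one bdev name per line from an nvmf_get_subsystems-style
# reply; the JSON below is abbreviated from the log output above.
subsystems_json='{"namespaces": [
  {"nsid": 1, "bdev_name": "Malloc2"},
  {"nsid": 2, "bdev_name": "Malloc4"}
]}'

echo "$subsystems_json" | grep -o '"bdev_name": "[^"]*"' | cut -d'"' -f4
```

This prints `Malloc2` and `Malloc4`, matching the two namespaces attached to cnode2 in the log.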
00:16:09.051 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:09.051 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 817609' 00:16:09.051 killing process with pid 817609 00:16:09.051 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 817609 00:16:09.051 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 817609 00:16:09.312 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:09.312 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:09.312 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:09.312 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:09.312 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:09.312 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=826752 00:16:09.312 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 826752' 00:16:09.312 Process pid: 826752 00:16:09.312 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:09.312 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:09.312 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 826752 00:16:09.312 23:04:26 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 826752 ']' 00:16:09.312 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.312 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:09.312 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.312 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:09.312 23:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:09.312 [2024-07-24 23:04:26.995854] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:09.312 [2024-07-24 23:04:26.996812] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:16:09.312 [2024-07-24 23:04:26.996855] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:09.312 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.312 [2024-07-24 23:04:27.064560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:09.574 [2024-07-24 23:04:27.130014] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:09.574 [2024-07-24 23:04:27.130053] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:09.574 [2024-07-24 23:04:27.130060] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:09.574 [2024-07-24 23:04:27.130071] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:09.574 [2024-07-24 23:04:27.130077] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:09.574 [2024-07-24 23:04:27.130217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.574 [2024-07-24 23:04:27.130327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:09.574 [2024-07-24 23:04:27.130481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.574 [2024-07-24 23:04:27.130483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:09.574 [2024-07-24 23:04:27.194449] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:09.574 [2024-07-24 23:04:27.194464] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:09.574 [2024-07-24 23:04:27.195517] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:16:09.574 [2024-07-24 23:04:27.195789] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:09.574 [2024-07-24 23:04:27.195921] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
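The `killprocess 817609` records earlier in the trace (`kill -0`, `uname`, `ps --no-headers -o comm=`, the `reactor_0 = sudo` comparison, then `kill` and `wait`) follow a recognizable shape. A reconstructed sketch under those observed steps; the exact sudo-wrapper handling and error paths are assumptions:

```shell
# Sketch of the killprocess pattern visible in the xtrace records:
# verify the pid is alive, look up its comm name, refuse to signal a
# bare "sudo" wrapper, then kill and reap the process.
killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1                 # no pid given
    kill -0 "$pid" 2>/dev/null || return 1    # process not running
    process_name=$(ps --no-headers -o comm= "$pid")
    if [ "$process_name" = "sudo" ]; then
        return 1    # real target is sudo's child; not handled in this sketch
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null
    return 0
}

sleep 2 &
killprocess $!
```

In the log, `comm` resolves to `reactor_0` (the SPDK reactor thread name), so the helper proceeds straight to the kill/wait step.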
00:16:10.145 23:04:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:10.145 23:04:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:16:10.145 23:04:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:11.086 23:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:11.347 23:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:11.347 23:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:11.347 23:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:11.347 23:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:11.347 23:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:11.347 Malloc1 00:16:11.347 23:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:11.608 23:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:11.870 23:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:16:11.870 23:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:11.870 23:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:11.870 23:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:12.130 Malloc2 00:16:12.130 23:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:12.390 23:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:12.390 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:12.651 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:12.651 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 826752 00:16:12.651 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 826752 ']' 00:16:12.651 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 826752 00:16:12.651 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:16:12.651 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:12.651 23:04:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 826752 00:16:12.651 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:12.651 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:12.651 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 826752' 00:16:12.651 killing process with pid 826752 00:16:12.651 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 826752 00:16:12.651 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 826752 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:12.913 00:16:12.913 real 0m50.589s 00:16:12.913 user 3m20.443s 00:16:12.913 sys 0m3.040s 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:12.913 ************************************ 00:16:12.913 END TEST nvmf_vfio_user 00:16:12.913 ************************************ 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:16:12.913 ************************************ 00:16:12.913 START TEST nvmf_vfio_user_nvme_compliance 00:16:12.913 ************************************ 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:12.913 * Looking for test storage... 00:16:12.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.913 23:04:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:12.913 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:13.174 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:13.175 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.175 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.175 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:13.175 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:13.175 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:13.175 23:04:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:13.175 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:13.175 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:13.175 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:13.175 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:13.175 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=827641 00:16:13.175 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 827641' 00:16:13.175 Process pid: 827641 00:16:13.175 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:13.175 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 827641 00:16:13.175 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:13.175 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 827641 ']' 00:16:13.175 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.175 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:13.175 23:04:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.175 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:13.175 23:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:13.175 [2024-07-24 23:04:30.764180] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:16:13.175 [2024-07-24 23:04:30.764250] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:13.175 EAL: No free 2048 kB hugepages reported on node 1 00:16:13.175 [2024-07-24 23:04:30.836953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:13.175 [2024-07-24 23:04:30.911760] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:13.175 [2024-07-24 23:04:30.911802] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:13.175 [2024-07-24 23:04:30.911810] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:13.175 [2024-07-24 23:04:30.911816] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:13.175 [2024-07-24 23:04:30.911822] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:13.175 [2024-07-24 23:04:30.911887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.175 [2024-07-24 23:04:30.912003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:13.175 [2024-07-24 23:04:30.912005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.115 23:04:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:14.115 23:04:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:16:14.115 23:04:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:15.057 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:15.057 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:15.057 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:15.057 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.057 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:15.057 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.057 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:15.057 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:15.057 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.057 23:04:32 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:15.057 malloc0 00:16:15.057 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.057 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:15.057 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.057 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:15.057 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.057 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:15.057 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.057 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:15.057 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.057 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:15.057 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.058 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:15.058 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:15.058 23:04:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:15.058 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.058 00:16:15.058 00:16:15.058 CUnit - A unit testing framework for C - Version 2.1-3 00:16:15.058 http://cunit.sourceforge.net/ 00:16:15.058 00:16:15.058 00:16:15.058 Suite: nvme_compliance 00:16:15.058 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-24 23:04:32.808211] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:15.058 [2024-07-24 23:04:32.809542] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:15.058 [2024-07-24 23:04:32.809553] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:15.058 [2024-07-24 23:04:32.809557] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:15.058 [2024-07-24 23:04:32.811231] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:15.320 passed 00:16:15.320 Test: admin_identify_ctrlr_verify_fused ...[2024-07-24 23:04:32.907829] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:15.320 [2024-07-24 23:04:32.910854] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:15.320 passed 00:16:15.320 Test: admin_identify_ns ...[2024-07-24 23:04:33.005002] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:15.320 [2024-07-24 23:04:33.068762] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:15.320 [2024-07-24 23:04:33.076763] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:15.320 [2024-07-24 
23:04:33.097877] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:15.584 passed 00:16:15.584 Test: admin_get_features_mandatory_features ...[2024-07-24 23:04:33.189507] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:15.584 [2024-07-24 23:04:33.192526] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:15.584 passed 00:16:15.584 Test: admin_get_features_optional_features ...[2024-07-24 23:04:33.286094] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:15.584 [2024-07-24 23:04:33.289121] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:15.584 passed 00:16:15.843 Test: admin_set_features_number_of_queues ...[2024-07-24 23:04:33.383279] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:15.843 [2024-07-24 23:04:33.487862] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:15.843 passed 00:16:15.843 Test: admin_get_log_page_mandatory_logs ...[2024-07-24 23:04:33.579879] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:15.843 [2024-07-24 23:04:33.582899] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:15.843 passed 00:16:16.104 Test: admin_get_log_page_with_lpo ...[2024-07-24 23:04:33.678012] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:16.104 [2024-07-24 23:04:33.745763] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:16.104 [2024-07-24 23:04:33.758820] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:16.104 passed 00:16:16.104 Test: fabric_property_get ...[2024-07-24 23:04:33.850462] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:16.104 [2024-07-24 23:04:33.851705] 
vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:16.104 [2024-07-24 23:04:33.853472] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:16.104 passed 00:16:16.365 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-24 23:04:33.948020] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:16.365 [2024-07-24 23:04:33.949278] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:16.365 [2024-07-24 23:04:33.951041] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:16.365 passed 00:16:16.365 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-24 23:04:34.042208] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:16.365 [2024-07-24 23:04:34.129763] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:16.365 [2024-07-24 23:04:34.145762] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:16.365 [2024-07-24 23:04:34.150851] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:16.626 passed 00:16:16.626 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-24 23:04:34.241479] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:16.626 [2024-07-24 23:04:34.242729] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:16.626 [2024-07-24 23:04:34.244497] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:16.626 passed 00:16:16.626 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-24 23:04:34.337005] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:16.886 [2024-07-24 23:04:34.413765] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be 
deleted first 00:16:16.886 [2024-07-24 23:04:34.437762] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:16.886 [2024-07-24 23:04:34.442840] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:16.886 passed 00:16:16.886 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-24 23:04:34.536517] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:16.886 [2024-07-24 23:04:34.537763] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:16.886 [2024-07-24 23:04:34.537787] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:16.886 [2024-07-24 23:04:34.539532] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:16.886 passed 00:16:16.886 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-24 23:04:34.632624] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:17.147 [2024-07-24 23:04:34.723769] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:17.147 [2024-07-24 23:04:34.731763] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:17.147 [2024-07-24 23:04:34.739761] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:17.147 [2024-07-24 23:04:34.747759] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:17.147 [2024-07-24 23:04:34.776852] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:17.147 passed 00:16:17.147 Test: admin_create_io_sq_verify_pc ...[2024-07-24 23:04:34.868846] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:17.147 [2024-07-24 23:04:34.886767] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:17.147 
[2024-07-24 23:04:34.904027] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:17.408 passed 00:16:17.408 Test: admin_create_io_qp_max_qps ...[2024-07-24 23:04:34.994547] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:18.351 [2024-07-24 23:04:36.103764] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:18.923 [2024-07-24 23:04:36.497698] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:18.923 passed 00:16:18.923 Test: admin_create_io_sq_shared_cq ...[2024-07-24 23:04:36.589825] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:19.184 [2024-07-24 23:04:36.722759] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:19.184 [2024-07-24 23:04:36.759827] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:19.184 passed 00:16:19.184 00:16:19.184 Run Summary: Type Total Ran Passed Failed Inactive 00:16:19.184 suites 1 1 n/a 0 0 00:16:19.184 tests 18 18 18 0 0 00:16:19.184 asserts 360 360 360 0 n/a 00:16:19.184 00:16:19.184 Elapsed time = 1.658 seconds 00:16:19.184 23:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 827641 00:16:19.184 23:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 827641 ']' 00:16:19.184 23:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 827641 00:16:19.184 23:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:16:19.184 23:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:19.184 23:04:36 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 827641 00:16:19.184 23:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:19.184 23:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:19.184 23:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 827641' 00:16:19.184 killing process with pid 827641 00:16:19.184 23:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 827641 00:16:19.184 23:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 827641 00:16:19.445 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:19.445 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:19.445 00:16:19.445 real 0m6.439s 00:16:19.445 user 0m18.406s 00:16:19.445 sys 0m0.486s 00:16:19.445 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:19.445 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:19.445 ************************************ 00:16:19.445 END TEST nvmf_vfio_user_nvme_compliance 00:16:19.445 ************************************ 00:16:19.445 23:04:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:19.445 23:04:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:19.446 
23:04:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:19.446 ************************************ 00:16:19.446 START TEST nvmf_vfio_user_fuzz 00:16:19.446 ************************************ 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:19.446 * Looking for test storage... 00:16:19.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- 
# NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.446 23:04:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:19.446 23:04:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=828869 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 828869' 00:16:19.446 Process pid: 828869 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 828869 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 828869 ']' 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:19.446 23:04:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:20.419 23:04:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:20.419 23:04:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:16:20.419 23:04:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:21.362 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:21.362 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.362 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:21.362 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.362 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:21.362 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:21.362 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.362 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:21.362 malloc0 00:16:21.362 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.362 23:04:39 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:21.362 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.362 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:21.362 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.362 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:21.362 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.362 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:21.362 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.362 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:21.362 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.362 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:21.362 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.362 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:21.362 23:04:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 
'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:53.487 Fuzzing completed. Shutting down the fuzz application 00:16:53.488 00:16:53.488 Dumping successful admin opcodes: 00:16:53.488 8, 9, 10, 24, 00:16:53.488 Dumping successful io opcodes: 00:16:53.488 0, 00:16:53.488 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1252750, total successful commands: 4917, random_seed: 4084390336 00:16:53.488 NS: 0x200003a1ef00 admin qp, Total commands completed: 157450, total successful commands: 1266, random_seed: 2402608256 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 828869 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 828869 ']' 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 828869 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 828869 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:53.488 23:05:10 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 828869' 00:16:53.488 killing process with pid 828869 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 828869 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 828869 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:53.488 00:16:53.488 real 0m33.689s 00:16:53.488 user 0m40.681s 00:16:53.488 sys 0m23.906s 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:53.488 ************************************ 00:16:53.488 END TEST nvmf_vfio_user_fuzz 00:16:53.488 ************************************ 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:53.488 
************************************ 00:16:53.488 START TEST nvmf_auth_target 00:16:53.488 ************************************ 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:53.488 * Looking for test storage... 00:16:53.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.488 23:05:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- 
# subnqn=nqn.2024-03.io.spdk:cnode0 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:53.488 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:53.489 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:53.489 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.489 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:53.489 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.489 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:53.489 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:53.489 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:53.489 23:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:17:01.634 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:01.634 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:01.634 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:01.634 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:01.634 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:01.634 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:01.634 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:01.634 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:01.634 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:01.634 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:01.634 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:01.634 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:01.634 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:01.634 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:01.634 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:01.634 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:01.634 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:01.634 23:05:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:01.634 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:01.634 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:01.634 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:01.634 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:01.634 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:01.634 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:01.634 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:01.634 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:01.634 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:01.634 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:01.634 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:01.634 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:01.634 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:01.634 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:01.634 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:01.635 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:01.635 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:01.635 Found net devices under 0000:31:00.0: cvl_0_0 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:01.635 Found net devices under 0000:31:00.1: cvl_0_1 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:01.635 23:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:01.635 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:01.635 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:01.635 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:01.635 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:01.635 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:01.635 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:01.635 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:01.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
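[Editor's note] The nvmf_tcp_init plumbing traced above boils down to the following command sequence, consolidated here for reference. This is a sketch, not part of the run: the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are whatever this particular run detected and assigned, and all commands require root. The idea is to move the target-side port into its own network namespace so initiator and target talk over a real TCP path on the same host.

```shell
# Consolidated from the trace above (requires root; names/addresses
# are the ones this run happened to use).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                      # target gets its own netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, root netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
ping -c 1 10.0.0.2                                # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
```

Subsequent target commands are then prefixed with `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` array in the trace).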
00:17:01.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:17:01.635 00:17:01.635 --- 10.0.0.2 ping statistics --- 00:17:01.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.635 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:17:01.635 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:01.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:01.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.366 ms 00:17:01.635 00:17:01.635 --- 10.0.0.1 ping statistics --- 00:17:01.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.635 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:17:01.635 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:01.635 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:01.635 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:01.635 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:01.635 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:01.635 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:01.635 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:01.635 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:01.635 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:01.635 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:01.635 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 00:17:01.635 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:01.635 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.635 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=839640 00:17:01.635 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 839640 00:17:01.635 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:01.635 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 839640 ']' 00:17:01.635 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.635 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:01.635 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:01.635 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:01.635 23:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=839867 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@726 -- # digest=null 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5363a208ff697e40e0912ef7cf6fe1778f544dd3fd049a06 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.a0A 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5363a208ff697e40e0912ef7cf6fe1778f544dd3fd049a06 0 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5363a208ff697e40e0912ef7cf6fe1778f544dd3fd049a06 0 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5363a208ff697e40e0912ef7cf6fe1778f544dd3fd049a06 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.a0A 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.a0A 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.a0A 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5bf4d706cf82e1d943e5f899acdf19c371d639e55f7061991fc0167fb5a4ff8e 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.056 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5bf4d706cf82e1d943e5f899acdf19c371d639e55f7061991fc0167fb5a4ff8e 3 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5bf4d706cf82e1d943e5f899acdf19c371d639e55f7061991fc0167fb5a4ff8e 3 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5bf4d706cf82e1d943e5f899acdf19c371d639e55f7061991fc0167fb5a4ff8e 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@704 -- # digest=3 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.056 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.056 00:17:02.577 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.056 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b45d7c8301118dd6c714d6176a864c14 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.n62 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b45d7c8301118dd6c714d6176a864c14 1 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
b45d7c8301118dd6c714d6176a864c14 1 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b45d7c8301118dd6c714d6176a864c14 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.n62 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.n62 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.n62 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=01b810fe343f30448d5b6d04ae93623c537365940118dd97 00:17:02.578 23:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.FLt 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 01b810fe343f30448d5b6d04ae93623c537365940118dd97 2 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 01b810fe343f30448d5b6d04ae93623c537365940118dd97 2 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=01b810fe343f30448d5b6d04ae93623c537365940118dd97 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:02.578 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:02.839 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.FLt 00:17:02.839 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.FLt 00:17:02.839 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.FLt 00:17:02.839 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A 
digests 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f0fdc20e55283a322156bb019efd8a043ad4f4e65f9ae992 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.gE8 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f0fdc20e55283a322156bb019efd8a043ad4f4e65f9ae992 2 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f0fdc20e55283a322156bb019efd8a043ad4f4e65f9ae992 2 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f0fdc20e55283a322156bb019efd8a043ad4f4e65f9ae992 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.gE8 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.gE8 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # 
keys[2]=/tmp/spdk.key-sha384.gE8 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b368d20f171e7065d0461d3cd250a211 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Uaj 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b368d20f171e7065d0461d3cd250a211 1 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b368d20f171e7065d0461d3cd250a211 1 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b368d20f171e7065d0461d3cd250a211 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 
00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Uaj 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Uaj 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.Uaj 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=99d74ce98b3b1b289c06ce779fb70576a58bb3e3bc21bcb4b6689fe2b324ffc9 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.yzp 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 99d74ce98b3b1b289c06ce779fb70576a58bb3e3bc21bcb4b6689fe2b324ffc9 3 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # 
format_key DHHC-1 99d74ce98b3b1b289c06ce779fb70576a58bb3e3bc21bcb4b6689fe2b324ffc9 3 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=99d74ce98b3b1b289c06ce779fb70576a58bb3e3bc21bcb4b6689fe2b324ffc9 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.yzp 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.yzp 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.yzp 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 839640 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 839640 ']' 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:02.840 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.101 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:03.101 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:03.101 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 839867 /var/tmp/host.sock 00:17:03.101 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 839867 ']' 00:17:03.101 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:17:03.101 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:03.101 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:03.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:17:03.101 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:03.101 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.362 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:03.362 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:03.362 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:17:03.362 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.362 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.362 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.362 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:03.362 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.a0A 00:17:03.362 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.362 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.362 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.362 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.a0A 00:17:03.362 23:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.a0A 00:17:03.362 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha512.056 ]] 00:17:03.362 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.056 00:17:03.362 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.362 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.362 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.362 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.056 00:17:03.362 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.056 00:17:03.622 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:03.623 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.n62 00:17:03.623 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.623 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.623 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.623 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.n62 00:17:03.623 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.n62 00:17:03.623 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha384.FLt ]] 00:17:03.623 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.FLt 00:17:03.623 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.623 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.623 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.623 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.FLt 00:17:03.623 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.FLt 00:17:03.884 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:03.884 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.gE8 00:17:03.884 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.884 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.884 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.884 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.gE8 00:17:03.884 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.gE8 00:17:04.145 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha256.Uaj ]] 00:17:04.145 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Uaj 00:17:04.145 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.145 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.145 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.145 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Uaj 00:17:04.145 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Uaj 00:17:04.145 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:04.145 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.yzp 00:17:04.145 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.145 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.145 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.145 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.yzp 00:17:04.145 23:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.yzp 00:17:04.406 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
'' ]] 00:17:04.406 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:04.406 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:04.406 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:04.406 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:04.406 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:04.406 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:17:04.406 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:04.667 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:04.667 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:04.667 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:04.667 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.667 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.667 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.667 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
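In the loop above (target/auth.sh@81-86), every generated key file is registered twice: on the target side via `rpc_cmd` (over /var/tmp/spdk.sock) and on the host side via rpc.py against /var/tmp/host.sock, with the controller key `ckeyN` added only when the `[[ -n ... ]]` check at @84 passes. A dry-run sketch of that control flow, with the two RPC wrappers stubbed as `echo` so it runs without a live SPDK target (key paths are sample values from this trace):

```shell
#!/usr/bin/env bash
# Dry-run of the key-loading loop; rpc_cmd/hostrpc are echo stubs standing
# in for the real /var/tmp/spdk.sock and /var/tmp/host.sock RPC calls.
rpc_cmd() { echo "target: $*"; }
hostrpc() { echo "host:   $*"; }

keys=(/tmp/spdk.key-null.a0A   /tmp/spdk.key-sha256.n62 \
      /tmp/spdk.key-sha384.gE8 /tmp/spdk.key-sha512.yzp)
ckeys=(/tmp/spdk.key-sha512.056 /tmp/spdk.key-sha384.FLt \
       /tmp/spdk.key-sha256.Uaj "")          # key3 has no controller key

for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[$i]}"
    hostrpc keyring_file_add_key "key$i" "${keys[$i]}"
    if [[ -n ${ckeys[$i]} ]]; then           # target/auth.sh@84 check
        rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        hostrpc keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    fi
done
```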
00:17:04.667 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.667 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.667 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.667 00:17:04.667 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:04.667 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.667 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.928 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.928 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.928 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.928 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.928 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.928 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:17:04.928 { 00:17:04.928 "cntlid": 1, 00:17:04.928 "qid": 0, 00:17:04.928 "state": "enabled", 00:17:04.928 "thread": "nvmf_tgt_poll_group_000", 00:17:04.928 "listen_address": { 00:17:04.928 "trtype": "TCP", 00:17:04.928 "adrfam": "IPv4", 00:17:04.928 "traddr": "10.0.0.2", 00:17:04.928 "trsvcid": "4420" 00:17:04.928 }, 00:17:04.928 "peer_address": { 00:17:04.928 "trtype": "TCP", 00:17:04.928 "adrfam": "IPv4", 00:17:04.928 "traddr": "10.0.0.1", 00:17:04.928 "trsvcid": "59962" 00:17:04.928 }, 00:17:04.928 "auth": { 00:17:04.928 "state": "completed", 00:17:04.928 "digest": "sha256", 00:17:04.928 "dhgroup": "null" 00:17:04.928 } 00:17:04.928 } 00:17:04.928 ]' 00:17:04.928 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:04.928 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:04.928 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:04.928 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:04.928 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:04.928 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.928 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.928 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.189 23:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 
801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:NTM2M2EyMDhmZjY5N2U0MGUwOTEyZWY3Y2Y2ZmUxNzc4ZjU0NGRkM2ZkMDQ5YTA2s/bLvQ==: --dhchap-ctrl-secret DHHC-1:03:NWJmNGQ3MDZjZjgyZTFkOTQzZTVmODk5YWNkZjE5YzM3MWQ2MzllNTVmNzA2MTk5MWZjMDE2N2ZiNWE0ZmY4ZfryTDs=: 00:17:06.137 23:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.137 23:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:06.137 23:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.137 23:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.137 23:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.137 23:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:06.137 23:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:06.137 23:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:06.137 23:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:17:06.137 23:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:06.137 23:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:06.137 23:05:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:06.137 23:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:06.137 23:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.137 23:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.137 23:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.137 23:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.137 23:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.137 23:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.137 23:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.398 00:17:06.398 23:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:06.398 23:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
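The `connect_authenticate` verification (target/auth.sh@44-48 in the trace) pulls the `nvmf_subsystem_get_qpairs` JSON and asserts `auth.digest`, `auth.dhgroup`, and `auth.state` with jq. The same checks can be replayed offline against a trimmed copy of the qpair document captured above (timestamps removed; jq assumed installed, as in the trace):

```shell
#!/usr/bin/env bash
# Re-run the connect_authenticate jq checks against a trimmed copy of the
# qpair JSON from the trace; requires jq.
qpairs='[{ "cntlid": 1, "qid": 0, "state": "enabled",
           "auth": { "state": "completed", "digest": "sha256", "dhgroup": "null" } }]'
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null      ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
echo "auth state verified"
```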
00:17:06.398 23:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:06.398 23:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.398 23:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.398 23:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.398 23:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.398 23:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.398 23:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:06.398 { 00:17:06.398 "cntlid": 3, 00:17:06.398 "qid": 0, 00:17:06.398 "state": "enabled", 00:17:06.398 "thread": "nvmf_tgt_poll_group_000", 00:17:06.398 "listen_address": { 00:17:06.398 "trtype": "TCP", 00:17:06.398 "adrfam": "IPv4", 00:17:06.398 "traddr": "10.0.0.2", 00:17:06.398 "trsvcid": "4420" 00:17:06.398 }, 00:17:06.398 "peer_address": { 00:17:06.398 "trtype": "TCP", 00:17:06.398 "adrfam": "IPv4", 00:17:06.398 "traddr": "10.0.0.1", 00:17:06.398 "trsvcid": "44360" 00:17:06.398 }, 00:17:06.398 "auth": { 00:17:06.398 "state": "completed", 00:17:06.398 "digest": "sha256", 00:17:06.398 "dhgroup": "null" 00:17:06.398 } 00:17:06.398 } 00:17:06.398 ]' 00:17:06.398 23:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:06.398 23:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:06.398 23:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:06.659 23:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:06.659 23:05:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:06.659 23:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.659 23:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.659 23:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.920 23:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjQ1ZDdjODMwMTExOGRkNmM3MTRkNjE3NmE4NjRjMTRSZFkv: --dhchap-ctrl-secret DHHC-1:02:MDFiODEwZmUzNDNmMzA0NDhkNWI2ZDA0YWU5MzYyM2M1MzczNjU5NDAxMThkZDk3xKaCgg==: 00:17:07.492 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.492 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:07.492 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.492 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.492 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.492 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:07.492 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:07.492 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:07.492 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:17:07.492 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:07.492 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:07.492 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:07.492 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:07.492 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.492 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.492 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.492 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.753 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.753 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.753 
23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.753 00:17:07.753 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:07.753 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:07.753 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.014 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.014 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.014 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.014 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.014 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.014 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:08.014 { 00:17:08.014 "cntlid": 5, 00:17:08.014 "qid": 0, 00:17:08.014 "state": "enabled", 00:17:08.014 "thread": "nvmf_tgt_poll_group_000", 00:17:08.014 "listen_address": { 00:17:08.014 "trtype": "TCP", 00:17:08.014 "adrfam": "IPv4", 00:17:08.014 "traddr": "10.0.0.2", 00:17:08.014 "trsvcid": "4420" 00:17:08.014 }, 00:17:08.014 "peer_address": { 00:17:08.014 "trtype": "TCP", 00:17:08.014 "adrfam": "IPv4", 00:17:08.014 "traddr": 
"10.0.0.1", 00:17:08.014 "trsvcid": "44400" 00:17:08.014 }, 00:17:08.014 "auth": { 00:17:08.014 "state": "completed", 00:17:08.014 "digest": "sha256", 00:17:08.014 "dhgroup": "null" 00:17:08.014 } 00:17:08.014 } 00:17:08.014 ]' 00:17:08.014 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:08.014 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:08.014 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:08.014 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:08.014 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:08.275 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.275 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.275 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.275 23:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjBmZGMyMGU1NTI4M2EzMjIxNTZiYjAxOWVmZDhhMDQzYWQ0ZjRlNjVmOWFlOTkyddIWdA==: --dhchap-ctrl-secret DHHC-1:01:YjM2OGQyMGYxNzFlNzA2NWQwNDYxZDNjZDI1MGEyMTFR/6t2: 00:17:09.289 23:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.289 23:05:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:09.289 23:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.289 23:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.289 23:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.289 23:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:09.289 23:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:09.289 23:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:09.289 23:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:17:09.289 23:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:09.289 23:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:09.289 23:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:09.289 23:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:09.289 23:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.289 23:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:17:09.289 23:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.289 23:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.289 23:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.289 23:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:09.289 23:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:09.289 00:17:09.289 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:09.289 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:09.289 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.550 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.550 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.550 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.550 23:05:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.550 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.550 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:09.550 { 00:17:09.550 "cntlid": 7, 00:17:09.550 "qid": 0, 00:17:09.550 "state": "enabled", 00:17:09.550 "thread": "nvmf_tgt_poll_group_000", 00:17:09.550 "listen_address": { 00:17:09.550 "trtype": "TCP", 00:17:09.550 "adrfam": "IPv4", 00:17:09.550 "traddr": "10.0.0.2", 00:17:09.550 "trsvcid": "4420" 00:17:09.550 }, 00:17:09.550 "peer_address": { 00:17:09.550 "trtype": "TCP", 00:17:09.550 "adrfam": "IPv4", 00:17:09.550 "traddr": "10.0.0.1", 00:17:09.550 "trsvcid": "44426" 00:17:09.550 }, 00:17:09.550 "auth": { 00:17:09.550 "state": "completed", 00:17:09.550 "digest": "sha256", 00:17:09.551 "dhgroup": "null" 00:17:09.551 } 00:17:09.551 } 00:17:09.551 ]' 00:17:09.551 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:09.551 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:09.551 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.551 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:09.551 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.811 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.811 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.811 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.811 23:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:OTlkNzRjZTk4YjNiMWIyODljMDZjZTc3OWZiNzA1NzZhNThiYjNlM2JjMjFiY2I0YjY2ODlmZTJiMzI0ZmZjOR/2D94=: 00:17:10.752 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.753 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:10.753 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.753 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.753 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.753 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:10.753 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:10.753 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:10.753 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:10.753 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe2048 0 00:17:10.753 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:10.753 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:10.753 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:10.753 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:10.753 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.753 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.753 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.753 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.753 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.753 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.753 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.013 00:17:11.013 23:05:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.013 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:11.013 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.013 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.013 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.013 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.013 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.013 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.013 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:11.013 { 00:17:11.013 "cntlid": 9, 00:17:11.013 "qid": 0, 00:17:11.013 "state": "enabled", 00:17:11.013 "thread": "nvmf_tgt_poll_group_000", 00:17:11.013 "listen_address": { 00:17:11.013 "trtype": "TCP", 00:17:11.013 "adrfam": "IPv4", 00:17:11.013 "traddr": "10.0.0.2", 00:17:11.013 "trsvcid": "4420" 00:17:11.013 }, 00:17:11.013 "peer_address": { 00:17:11.013 "trtype": "TCP", 00:17:11.013 "adrfam": "IPv4", 00:17:11.013 "traddr": "10.0.0.1", 00:17:11.013 "trsvcid": "44448" 00:17:11.013 }, 00:17:11.013 "auth": { 00:17:11.013 "state": "completed", 00:17:11.013 "digest": "sha256", 00:17:11.013 "dhgroup": "ffdhe2048" 00:17:11.013 } 00:17:11.013 } 00:17:11.013 ]' 00:17:11.013 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:11.273 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:11.273 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:11.273 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:11.273 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:11.273 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.273 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.273 23:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.532 23:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:NTM2M2EyMDhmZjY5N2U0MGUwOTEyZWY3Y2Y2ZmUxNzc4ZjU0NGRkM2ZkMDQ5YTA2s/bLvQ==: --dhchap-ctrl-secret DHHC-1:03:NWJmNGQ3MDZjZjgyZTFkOTQzZTVmODk5YWNkZjE5YzM3MWQ2MzllNTVmNzA2MTk5MWZjMDE2N2ZiNWE0ZmY4ZfryTDs=: 00:17:12.101 23:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.101 23:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:12.101 23:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.101 23:05:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.101 23:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.101 23:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:12.101 23:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:12.101 23:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:12.362 23:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:17:12.362 23:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:12.362 23:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:12.362 23:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:12.362 23:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:12.362 23:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.362 23:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.362 23:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.362 23:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.362 23:05:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.362 23:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.362 23:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.623 00:17:12.623 23:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:12.623 23:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:12.623 23:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.623 23:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.623 23:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.623 23:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.623 23:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.623 23:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.623 23:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:12.623 { 
00:17:12.623 "cntlid": 11, 00:17:12.623 "qid": 0, 00:17:12.623 "state": "enabled", 00:17:12.623 "thread": "nvmf_tgt_poll_group_000", 00:17:12.623 "listen_address": { 00:17:12.623 "trtype": "TCP", 00:17:12.623 "adrfam": "IPv4", 00:17:12.623 "traddr": "10.0.0.2", 00:17:12.623 "trsvcid": "4420" 00:17:12.623 }, 00:17:12.623 "peer_address": { 00:17:12.623 "trtype": "TCP", 00:17:12.623 "adrfam": "IPv4", 00:17:12.623 "traddr": "10.0.0.1", 00:17:12.623 "trsvcid": "44476" 00:17:12.623 }, 00:17:12.623 "auth": { 00:17:12.623 "state": "completed", 00:17:12.623 "digest": "sha256", 00:17:12.623 "dhgroup": "ffdhe2048" 00:17:12.623 } 00:17:12.623 } 00:17:12.623 ]' 00:17:12.623 23:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:12.883 23:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:12.883 23:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:12.883 23:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:12.883 23:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:12.883 23:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.883 23:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.883 23:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.883 23:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 
801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjQ1ZDdjODMwMTExOGRkNmM3MTRkNjE3NmE4NjRjMTRSZFkv: --dhchap-ctrl-secret DHHC-1:02:MDFiODEwZmUzNDNmMzA0NDhkNWI2ZDA0YWU5MzYyM2M1MzczNjU5NDAxMThkZDk3xKaCgg==: 00:17:13.823 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.823 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:13.823 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.823 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.823 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.823 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:13.823 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:13.823 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:13.823 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:17:13.823 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.823 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:13.823 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe2048 00:17:13.823 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:13.823 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.823 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.823 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.823 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.823 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.823 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.823 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.083 00:17:14.083 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:14.083 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:14.083 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.343 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.343 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.343 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.343 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.343 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.343 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.343 { 00:17:14.343 "cntlid": 13, 00:17:14.343 "qid": 0, 00:17:14.343 "state": "enabled", 00:17:14.343 "thread": "nvmf_tgt_poll_group_000", 00:17:14.343 "listen_address": { 00:17:14.343 "trtype": "TCP", 00:17:14.343 "adrfam": "IPv4", 00:17:14.343 "traddr": "10.0.0.2", 00:17:14.343 "trsvcid": "4420" 00:17:14.343 }, 00:17:14.343 "peer_address": { 00:17:14.343 "trtype": "TCP", 00:17:14.343 "adrfam": "IPv4", 00:17:14.343 "traddr": "10.0.0.1", 00:17:14.343 "trsvcid": "44494" 00:17:14.343 }, 00:17:14.343 "auth": { 00:17:14.343 "state": "completed", 00:17:14.343 "digest": "sha256", 00:17:14.343 "dhgroup": "ffdhe2048" 00:17:14.343 } 00:17:14.343 } 00:17:14.343 ]' 00:17:14.343 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.343 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:14.343 23:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.343 23:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:14.343 23:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.343 23:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.343 23:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.343 23:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.603 23:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjBmZGMyMGU1NTI4M2EzMjIxNTZiYjAxOWVmZDhhMDQzYWQ0ZjRlNjVmOWFlOTkyddIWdA==: --dhchap-ctrl-secret DHHC-1:01:YjM2OGQyMGYxNzFlNzA2NWQwNDYxZDNjZDI1MGEyMTFR/6t2: 00:17:15.173 23:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.173 23:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:15.173 23:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.173 23:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.173 23:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.173 23:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.173 23:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:15.174 23:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:15.434 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:17:15.434 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.434 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:15.434 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:15.434 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:15.434 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.434 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:17:15.434 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.434 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.434 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.434 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:15.434 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:15.698 00:17:15.698 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:15.698 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:15.698 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.959 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.959 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.959 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.959 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.959 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.959 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:15.959 { 00:17:15.959 "cntlid": 15, 00:17:15.959 "qid": 0, 00:17:15.959 "state": "enabled", 00:17:15.959 "thread": "nvmf_tgt_poll_group_000", 00:17:15.959 "listen_address": { 00:17:15.959 "trtype": "TCP", 00:17:15.959 "adrfam": "IPv4", 00:17:15.959 "traddr": "10.0.0.2", 00:17:15.959 "trsvcid": "4420" 00:17:15.959 }, 00:17:15.959 "peer_address": { 00:17:15.959 "trtype": "TCP", 00:17:15.959 "adrfam": "IPv4", 00:17:15.959 "traddr": "10.0.0.1", 00:17:15.959 "trsvcid": "44522" 00:17:15.959 }, 00:17:15.959 "auth": { 
00:17:15.959 "state": "completed", 00:17:15.959 "digest": "sha256", 00:17:15.959 "dhgroup": "ffdhe2048" 00:17:15.959 } 00:17:15.959 } 00:17:15.959 ]' 00:17:15.959 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:15.959 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:15.959 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:15.959 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:15.959 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:15.959 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.959 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.959 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.219 23:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:OTlkNzRjZTk4YjNiMWIyODljMDZjZTc3OWZiNzA1NzZhNThiYjNlM2JjMjFiY2I0YjY2ODlmZTJiMzI0ZmZjOR/2D94=: 00:17:16.788 23:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.788 23:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:16.789 23:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.789 23:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.789 23:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.789 23:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:16.789 23:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:16.789 23:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:16.789 23:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:17.049 23:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:17:17.049 23:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.049 23:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:17.049 23:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:17.049 23:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:17.049 23:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.049 23:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.049 23:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.049 23:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.049 23:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.049 23:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.049 23:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.310 00:17:17.310 23:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:17.310 23:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:17.310 23:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.571 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.571 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.571 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:17.572 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.572 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.572 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:17.572 { 00:17:17.572 "cntlid": 17, 00:17:17.572 "qid": 0, 00:17:17.572 "state": "enabled", 00:17:17.572 "thread": "nvmf_tgt_poll_group_000", 00:17:17.572 "listen_address": { 00:17:17.572 "trtype": "TCP", 00:17:17.572 "adrfam": "IPv4", 00:17:17.572 "traddr": "10.0.0.2", 00:17:17.572 "trsvcid": "4420" 00:17:17.572 }, 00:17:17.572 "peer_address": { 00:17:17.572 "trtype": "TCP", 00:17:17.572 "adrfam": "IPv4", 00:17:17.572 "traddr": "10.0.0.1", 00:17:17.572 "trsvcid": "53514" 00:17:17.572 }, 00:17:17.572 "auth": { 00:17:17.572 "state": "completed", 00:17:17.572 "digest": "sha256", 00:17:17.572 "dhgroup": "ffdhe3072" 00:17:17.572 } 00:17:17.572 } 00:17:17.572 ]' 00:17:17.572 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:17.572 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:17.572 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:17.572 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:17.572 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:17.572 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.572 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.572 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.832 23:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:NTM2M2EyMDhmZjY5N2U0MGUwOTEyZWY3Y2Y2ZmUxNzc4ZjU0NGRkM2ZkMDQ5YTA2s/bLvQ==: --dhchap-ctrl-secret DHHC-1:03:NWJmNGQ3MDZjZjgyZTFkOTQzZTVmODk5YWNkZjE5YzM3MWQ2MzllNTVmNzA2MTk5MWZjMDE2N2ZiNWE0ZmY4ZfryTDs=: 00:17:18.403 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.403 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:18.403 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.403 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.403 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.403 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:18.403 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:18.403 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:18.663 23:05:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:17:18.663 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:18.663 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:18.663 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:18.663 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:18.663 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.663 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.663 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.663 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.663 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.663 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.663 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:18.924 00:17:18.924 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.924 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.924 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.184 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.185 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.185 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.185 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.185 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.185 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:19.185 { 00:17:19.185 "cntlid": 19, 00:17:19.185 "qid": 0, 00:17:19.185 "state": "enabled", 00:17:19.185 "thread": "nvmf_tgt_poll_group_000", 00:17:19.185 "listen_address": { 00:17:19.185 "trtype": "TCP", 00:17:19.185 "adrfam": "IPv4", 00:17:19.185 "traddr": "10.0.0.2", 00:17:19.185 "trsvcid": "4420" 00:17:19.185 }, 00:17:19.185 "peer_address": { 00:17:19.185 "trtype": "TCP", 00:17:19.185 "adrfam": "IPv4", 00:17:19.185 "traddr": "10.0.0.1", 00:17:19.185 "trsvcid": "53554" 00:17:19.185 }, 00:17:19.185 "auth": { 00:17:19.185 "state": "completed", 00:17:19.185 "digest": "sha256", 00:17:19.185 "dhgroup": "ffdhe3072" 00:17:19.185 } 00:17:19.185 } 00:17:19.185 ]' 00:17:19.185 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:19.185 
23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:19.185 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:19.185 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:19.185 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:19.185 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.185 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.185 23:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.445 23:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjQ1ZDdjODMwMTExOGRkNmM3MTRkNjE3NmE4NjRjMTRSZFkv: --dhchap-ctrl-secret DHHC-1:02:MDFiODEwZmUzNDNmMzA0NDhkNWI2ZDA0YWU5MzYyM2M1MzczNjU5NDAxMThkZDk3xKaCgg==: 00:17:20.044 23:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.044 23:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:20.045 23:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.045 23:05:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.045 23:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.045 23:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:20.045 23:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:20.045 23:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:20.305 23:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:17:20.305 23:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:20.305 23:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:20.305 23:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:20.305 23:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:20.305 23:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.305 23:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.305 23:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.305 23:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.305 23:05:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.305 23:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.305 23:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.565 00:17:20.565 23:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.565 23:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.565 23:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.565 23:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.565 23:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.565 23:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.565 23:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.565 23:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.565 23:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.565 { 
00:17:20.565 "cntlid": 21, 00:17:20.565 "qid": 0, 00:17:20.565 "state": "enabled", 00:17:20.565 "thread": "nvmf_tgt_poll_group_000", 00:17:20.565 "listen_address": { 00:17:20.565 "trtype": "TCP", 00:17:20.565 "adrfam": "IPv4", 00:17:20.565 "traddr": "10.0.0.2", 00:17:20.565 "trsvcid": "4420" 00:17:20.565 }, 00:17:20.565 "peer_address": { 00:17:20.565 "trtype": "TCP", 00:17:20.565 "adrfam": "IPv4", 00:17:20.565 "traddr": "10.0.0.1", 00:17:20.565 "trsvcid": "53578" 00:17:20.565 }, 00:17:20.565 "auth": { 00:17:20.565 "state": "completed", 00:17:20.565 "digest": "sha256", 00:17:20.565 "dhgroup": "ffdhe3072" 00:17:20.565 } 00:17:20.565 } 00:17:20.565 ]' 00:17:20.565 23:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.826 23:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:20.826 23:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:20.826 23:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:20.826 23:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:20.826 23:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.826 23:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.826 23:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.086 23:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 
801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjBmZGMyMGU1NTI4M2EzMjIxNTZiYjAxOWVmZDhhMDQzYWQ0ZjRlNjVmOWFlOTkyddIWdA==: --dhchap-ctrl-secret DHHC-1:01:YjM2OGQyMGYxNzFlNzA2NWQwNDYxZDNjZDI1MGEyMTFR/6t2: 00:17:21.657 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.657 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:21.657 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.657 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.657 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.657 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.657 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:21.657 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:21.918 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:17:21.918 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:21.918 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:21.918 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe3072 00:17:21.918 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:21.918 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.918 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:17:21.918 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.918 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.918 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.918 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:21.918 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:22.179 00:17:22.179 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.179 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.179 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.179 23:05:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.179 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.179 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.179 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.179 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.179 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.179 { 00:17:22.179 "cntlid": 23, 00:17:22.179 "qid": 0, 00:17:22.179 "state": "enabled", 00:17:22.179 "thread": "nvmf_tgt_poll_group_000", 00:17:22.179 "listen_address": { 00:17:22.179 "trtype": "TCP", 00:17:22.179 "adrfam": "IPv4", 00:17:22.179 "traddr": "10.0.0.2", 00:17:22.179 "trsvcid": "4420" 00:17:22.179 }, 00:17:22.179 "peer_address": { 00:17:22.179 "trtype": "TCP", 00:17:22.179 "adrfam": "IPv4", 00:17:22.179 "traddr": "10.0.0.1", 00:17:22.179 "trsvcid": "53600" 00:17:22.179 }, 00:17:22.179 "auth": { 00:17:22.179 "state": "completed", 00:17:22.179 "digest": "sha256", 00:17:22.179 "dhgroup": "ffdhe3072" 00:17:22.179 } 00:17:22.179 } 00:17:22.179 ]' 00:17:22.179 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.440 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:22.440 23:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.440 23:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:22.440 23:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.440 23:05:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.440 23:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.440 23:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.700 23:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:OTlkNzRjZTk4YjNiMWIyODljMDZjZTc3OWZiNzA1NzZhNThiYjNlM2JjMjFiY2I0YjY2ODlmZTJiMzI0ZmZjOR/2D94=: 00:17:23.271 23:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.271 23:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:23.271 23:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.271 23:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.271 23:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.271 23:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:23.271 23:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:23.271 23:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:23.271 23:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:23.531 23:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:17:23.531 23:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:23.531 23:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:23.531 23:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:23.531 23:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:23.531 23:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.531 23:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.531 23:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.531 23:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.531 23:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.531 23:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.531 23:05:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.791 00:17:23.791 23:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:23.791 23:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.791 23:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:23.791 23:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.791 23:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.791 23:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.791 23:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.791 23:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.791 23:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:23.791 { 00:17:23.791 "cntlid": 25, 00:17:23.791 "qid": 0, 00:17:23.791 "state": "enabled", 00:17:23.791 "thread": "nvmf_tgt_poll_group_000", 00:17:23.791 "listen_address": { 00:17:23.791 "trtype": "TCP", 00:17:23.791 "adrfam": "IPv4", 00:17:23.791 "traddr": "10.0.0.2", 00:17:23.791 "trsvcid": "4420" 00:17:23.791 }, 00:17:23.791 "peer_address": { 00:17:23.791 "trtype": "TCP", 00:17:23.791 "adrfam": "IPv4", 00:17:23.792 "traddr": "10.0.0.1", 
00:17:23.792 "trsvcid": "53628" 00:17:23.792 }, 00:17:23.792 "auth": { 00:17:23.792 "state": "completed", 00:17:23.792 "digest": "sha256", 00:17:23.792 "dhgroup": "ffdhe4096" 00:17:23.792 } 00:17:23.792 } 00:17:23.792 ]' 00:17:23.792 23:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:24.053 23:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:24.053 23:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.053 23:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:24.053 23:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:24.053 23:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.053 23:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.053 23:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.314 23:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:NTM2M2EyMDhmZjY5N2U0MGUwOTEyZWY3Y2Y2ZmUxNzc4ZjU0NGRkM2ZkMDQ5YTA2s/bLvQ==: --dhchap-ctrl-secret DHHC-1:03:NWJmNGQ3MDZjZjgyZTFkOTQzZTVmODk5YWNkZjE5YzM3MWQ2MzllNTVmNzA2MTk5MWZjMDE2N2ZiNWE0ZmY4ZfryTDs=: 00:17:24.885 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:17:24.885 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:24.885 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.885 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.885 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.885 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:24.885 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:24.885 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:25.146 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:17:25.146 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.146 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:25.146 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:25.146 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:25.146 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.146 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.146 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.146 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.146 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.146 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.146 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.406 00:17:25.406 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:25.406 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.407 23:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.407 23:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.407 23:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.407 23:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:25.407 23:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.407 23:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.407 23:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.407 { 00:17:25.407 "cntlid": 27, 00:17:25.407 "qid": 0, 00:17:25.407 "state": "enabled", 00:17:25.407 "thread": "nvmf_tgt_poll_group_000", 00:17:25.407 "listen_address": { 00:17:25.407 "trtype": "TCP", 00:17:25.407 "adrfam": "IPv4", 00:17:25.407 "traddr": "10.0.0.2", 00:17:25.407 "trsvcid": "4420" 00:17:25.407 }, 00:17:25.407 "peer_address": { 00:17:25.407 "trtype": "TCP", 00:17:25.407 "adrfam": "IPv4", 00:17:25.407 "traddr": "10.0.0.1", 00:17:25.407 "trsvcid": "53642" 00:17:25.407 }, 00:17:25.407 "auth": { 00:17:25.407 "state": "completed", 00:17:25.407 "digest": "sha256", 00:17:25.407 "dhgroup": "ffdhe4096" 00:17:25.407 } 00:17:25.407 } 00:17:25.407 ]' 00:17:25.407 23:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.668 23:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:25.668 23:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.668 23:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:25.668 23:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:25.668 23:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.668 23:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.668 23:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.931 23:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjQ1ZDdjODMwMTExOGRkNmM3MTRkNjE3NmE4NjRjMTRSZFkv: --dhchap-ctrl-secret DHHC-1:02:MDFiODEwZmUzNDNmMzA0NDhkNWI2ZDA0YWU5MzYyM2M1MzczNjU5NDAxMThkZDk3xKaCgg==: 00:17:26.540 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.540 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:26.540 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.540 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.540 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.540 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:26.540 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:26.540 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:26.801 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 2 00:17:26.801 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:26.801 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:26.801 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:26.801 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:26.801 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.801 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.801 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.801 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.801 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.801 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.801 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.061 00:17:27.061 23:05:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:27.062 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:27.062 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.062 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.062 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.062 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.062 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.062 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.062 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:27.062 { 00:17:27.062 "cntlid": 29, 00:17:27.062 "qid": 0, 00:17:27.062 "state": "enabled", 00:17:27.062 "thread": "nvmf_tgt_poll_group_000", 00:17:27.062 "listen_address": { 00:17:27.062 "trtype": "TCP", 00:17:27.062 "adrfam": "IPv4", 00:17:27.062 "traddr": "10.0.0.2", 00:17:27.062 "trsvcid": "4420" 00:17:27.062 }, 00:17:27.062 "peer_address": { 00:17:27.062 "trtype": "TCP", 00:17:27.062 "adrfam": "IPv4", 00:17:27.062 "traddr": "10.0.0.1", 00:17:27.062 "trsvcid": "55194" 00:17:27.062 }, 00:17:27.062 "auth": { 00:17:27.062 "state": "completed", 00:17:27.062 "digest": "sha256", 00:17:27.062 "dhgroup": "ffdhe4096" 00:17:27.062 } 00:17:27.062 } 00:17:27.062 ]' 00:17:27.062 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:27.323 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:27.323 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:27.323 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:27.323 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:27.323 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.323 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.323 23:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.323 23:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjBmZGMyMGU1NTI4M2EzMjIxNTZiYjAxOWVmZDhhMDQzYWQ0ZjRlNjVmOWFlOTkyddIWdA==: --dhchap-ctrl-secret DHHC-1:01:YjM2OGQyMGYxNzFlNzA2NWQwNDYxZDNjZDI1MGEyMTFR/6t2: 00:17:28.266 23:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.266 23:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:28.266 23:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.266 23:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:28.266 23:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.266 23:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:28.266 23:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:28.266 23:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:28.266 23:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:17:28.266 23:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:28.266 23:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:28.266 23:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:28.266 23:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:28.266 23:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.266 23:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:17:28.266 23:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.266 23:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.266 23:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:17:28.266 23:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:28.266 23:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:28.526 00:17:28.526 23:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:28.526 23:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.526 23:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:28.786 23:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.786 23:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.786 23:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.786 23:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.786 23:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.786 23:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.786 { 00:17:28.786 "cntlid": 31, 00:17:28.786 "qid": 0, 00:17:28.786 "state": "enabled", 00:17:28.786 "thread": "nvmf_tgt_poll_group_000", 
00:17:28.786 "listen_address": { 00:17:28.786 "trtype": "TCP", 00:17:28.786 "adrfam": "IPv4", 00:17:28.786 "traddr": "10.0.0.2", 00:17:28.786 "trsvcid": "4420" 00:17:28.786 }, 00:17:28.786 "peer_address": { 00:17:28.786 "trtype": "TCP", 00:17:28.786 "adrfam": "IPv4", 00:17:28.786 "traddr": "10.0.0.1", 00:17:28.786 "trsvcid": "55222" 00:17:28.786 }, 00:17:28.786 "auth": { 00:17:28.786 "state": "completed", 00:17:28.786 "digest": "sha256", 00:17:28.786 "dhgroup": "ffdhe4096" 00:17:28.786 } 00:17:28.786 } 00:17:28.786 ]' 00:17:28.786 23:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.786 23:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:28.786 23:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.786 23:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:28.786 23:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:29.047 23:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.047 23:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.047 23:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.047 23:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:OTlkNzRjZTk4YjNiMWIyODljMDZjZTc3OWZiNzA1NzZhNThiYjNlM2JjMjFiY2I0YjY2ODlmZTJiMzI0ZmZjOR/2D94=: 
00:17:29.990 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.990 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:29.990 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.990 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.990 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.990 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:29.990 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.990 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:29.990 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:29.990 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:17:29.990 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.990 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:29.990 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:29.990 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:17:29.990 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.990 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.990 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.990 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.990 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.990 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.990 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.250 00:17:30.250 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:30.250 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:30.250 23:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.510 23:05:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.510 23:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.510 23:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.510 23:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.510 23:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.510 23:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:30.510 { 00:17:30.510 "cntlid": 33, 00:17:30.510 "qid": 0, 00:17:30.510 "state": "enabled", 00:17:30.510 "thread": "nvmf_tgt_poll_group_000", 00:17:30.510 "listen_address": { 00:17:30.510 "trtype": "TCP", 00:17:30.510 "adrfam": "IPv4", 00:17:30.510 "traddr": "10.0.0.2", 00:17:30.510 "trsvcid": "4420" 00:17:30.510 }, 00:17:30.510 "peer_address": { 00:17:30.510 "trtype": "TCP", 00:17:30.510 "adrfam": "IPv4", 00:17:30.510 "traddr": "10.0.0.1", 00:17:30.510 "trsvcid": "55248" 00:17:30.510 }, 00:17:30.510 "auth": { 00:17:30.510 "state": "completed", 00:17:30.510 "digest": "sha256", 00:17:30.510 "dhgroup": "ffdhe6144" 00:17:30.510 } 00:17:30.510 } 00:17:30.510 ]' 00:17:30.510 23:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:30.510 23:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:30.510 23:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.510 23:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:30.511 23:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.511 23:05:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.511 23:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.511 23:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.771 23:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:NTM2M2EyMDhmZjY5N2U0MGUwOTEyZWY3Y2Y2ZmUxNzc4ZjU0NGRkM2ZkMDQ5YTA2s/bLvQ==: --dhchap-ctrl-secret DHHC-1:03:NWJmNGQ3MDZjZjgyZTFkOTQzZTVmODk5YWNkZjE5YzM3MWQ2MzllNTVmNzA2MTk5MWZjMDE2N2ZiNWE0ZmY4ZfryTDs=: 00:17:31.713 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.713 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:31.713 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.713 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.713 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.713 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.713 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe6144 00:17:31.713 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:31.713 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:17:31.713 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.713 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:31.713 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:31.713 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:31.713 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.713 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.713 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.713 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.713 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.713 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.713 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.974 00:17:31.974 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.974 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.974 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.235 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.235 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.235 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.235 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.235 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.235 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.235 { 00:17:32.235 "cntlid": 35, 00:17:32.235 "qid": 0, 00:17:32.235 "state": "enabled", 00:17:32.235 "thread": "nvmf_tgt_poll_group_000", 00:17:32.235 "listen_address": { 00:17:32.235 "trtype": "TCP", 00:17:32.235 "adrfam": "IPv4", 00:17:32.235 "traddr": "10.0.0.2", 00:17:32.235 "trsvcid": "4420" 00:17:32.235 }, 00:17:32.235 "peer_address": { 00:17:32.235 "trtype": "TCP", 00:17:32.235 "adrfam": "IPv4", 00:17:32.235 "traddr": "10.0.0.1", 00:17:32.235 "trsvcid": "55264" 00:17:32.235 
}, 00:17:32.235 "auth": { 00:17:32.235 "state": "completed", 00:17:32.235 "digest": "sha256", 00:17:32.235 "dhgroup": "ffdhe6144" 00:17:32.235 } 00:17:32.235 } 00:17:32.235 ]' 00:17:32.235 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.235 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:32.235 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.235 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:32.235 23:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.235 23:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.235 23:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.235 23:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.496 23:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjQ1ZDdjODMwMTExOGRkNmM3MTRkNjE3NmE4NjRjMTRSZFkv: --dhchap-ctrl-secret DHHC-1:02:MDFiODEwZmUzNDNmMzA0NDhkNWI2ZDA0YWU5MzYyM2M1MzczNjU5NDAxMThkZDk3xKaCgg==: 00:17:33.439 23:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.439 23:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:33.439 23:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.439 23:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.439 23:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.439 23:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.439 23:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:33.439 23:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:33.440 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:17:33.440 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:33.440 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:33.440 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:33.440 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:33.440 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.440 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:17:33.440 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.440 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.440 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.440 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.440 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.700 00:17:33.700 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:33.700 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:33.700 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.961 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.961 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.961 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.961 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:17:33.961 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.961 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:33.961 { 00:17:33.961 "cntlid": 37, 00:17:33.961 "qid": 0, 00:17:33.961 "state": "enabled", 00:17:33.961 "thread": "nvmf_tgt_poll_group_000", 00:17:33.961 "listen_address": { 00:17:33.961 "trtype": "TCP", 00:17:33.961 "adrfam": "IPv4", 00:17:33.961 "traddr": "10.0.0.2", 00:17:33.961 "trsvcid": "4420" 00:17:33.961 }, 00:17:33.961 "peer_address": { 00:17:33.961 "trtype": "TCP", 00:17:33.961 "adrfam": "IPv4", 00:17:33.961 "traddr": "10.0.0.1", 00:17:33.961 "trsvcid": "55288" 00:17:33.961 }, 00:17:33.961 "auth": { 00:17:33.961 "state": "completed", 00:17:33.961 "digest": "sha256", 00:17:33.961 "dhgroup": "ffdhe6144" 00:17:33.961 } 00:17:33.961 } 00:17:33.961 ]' 00:17:33.961 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:33.961 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:33.961 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:33.961 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:33.961 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:33.961 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.961 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.961 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:34.222 23:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjBmZGMyMGU1NTI4M2EzMjIxNTZiYjAxOWVmZDhhMDQzYWQ0ZjRlNjVmOWFlOTkyddIWdA==: --dhchap-ctrl-secret DHHC-1:01:YjM2OGQyMGYxNzFlNzA2NWQwNDYxZDNjZDI1MGEyMTFR/6t2: 00:17:34.794 23:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.055 23:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:35.055 23:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.055 23:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.055 23:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.055 23:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.055 23:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:35.055 23:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:35.055 23:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:17:35.055 23:05:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.055 23:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:35.055 23:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:35.055 23:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:35.055 23:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.055 23:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:17:35.055 23:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.055 23:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.055 23:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.055 23:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:35.055 23:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:35.316 00:17:35.576 23:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:35.576 23:05:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.576 23:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:35.576 23:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.576 23:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.576 23:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.576 23:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.576 23:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.576 23:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.576 { 00:17:35.576 "cntlid": 39, 00:17:35.576 "qid": 0, 00:17:35.576 "state": "enabled", 00:17:35.576 "thread": "nvmf_tgt_poll_group_000", 00:17:35.576 "listen_address": { 00:17:35.576 "trtype": "TCP", 00:17:35.576 "adrfam": "IPv4", 00:17:35.576 "traddr": "10.0.0.2", 00:17:35.576 "trsvcid": "4420" 00:17:35.576 }, 00:17:35.576 "peer_address": { 00:17:35.576 "trtype": "TCP", 00:17:35.576 "adrfam": "IPv4", 00:17:35.576 "traddr": "10.0.0.1", 00:17:35.576 "trsvcid": "55316" 00:17:35.576 }, 00:17:35.576 "auth": { 00:17:35.576 "state": "completed", 00:17:35.576 "digest": "sha256", 00:17:35.576 "dhgroup": "ffdhe6144" 00:17:35.576 } 00:17:35.576 } 00:17:35.576 ]' 00:17:35.576 23:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.576 23:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:35.576 23:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:35.836 23:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:35.836 23:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:35.836 23:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.836 23:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.836 23:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.836 23:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:OTlkNzRjZTk4YjNiMWIyODljMDZjZTc3OWZiNzA1NzZhNThiYjNlM2JjMjFiY2I0YjY2ODlmZTJiMzI0ZmZjOR/2D94=: 00:17:36.778 23:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.778 23:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:36.778 23:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.778 23:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.778 23:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.778 23:05:54 
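The pass above repeats one pattern for each key: restrict the host-side DH-CHAP digest/dhgroup, register the host NQN on the subsystem with the key under test, then attach a controller authenticating with that key. A minimal sketch of that sequence, with the rpc.py path, socket, NQNs, addresses, and key names taken from the log (the `RPC` indirection is an added dry-run hook, not part of the original script, so the sequence can be inspected without an SPDK test bed):

```shell
#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration from the log above.
# Defaults to a dry run (echo); on a real test bed RPC would be
# "<spdk>/scripts/rpc.py -s /var/tmp/host.sock" as seen in the log.
RPC="${RPC:-echo}"

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # limit the initiator to one digest/dhgroup combination
    $RPC bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # allow this host NQN on the subsystem with the key under test
    $RPC nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid"
    # attach a controller, authenticating with the same key
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid"
}

connect_authenticate sha256 ffdhe8192 0
```

After the attach, the log verifies the negotiated parameters by querying `nvmf_subsystem_get_qpairs` on the target and checking `.auth.digest`, `.auth.dhgroup`, and `.auth.state == "completed"` with jq, then tears the controller down before the next key.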
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:36.778 23:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:36.778 23:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:36.778 23:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:36.778 23:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:17:36.778 23:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:36.778 23:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:36.778 23:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:36.778 23:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:36.778 23:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.778 23:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.778 23:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.778 23:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.778 23:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.778 23:05:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.778 23:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.349 00:17:37.349 23:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.349 23:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.349 23:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.610 23:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.610 23:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.610 23:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.610 23:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.610 23:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.610 23:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:37.610 { 00:17:37.610 "cntlid": 41, 00:17:37.610 "qid": 0, 00:17:37.610 "state": "enabled", 00:17:37.610 "thread": 
"nvmf_tgt_poll_group_000", 00:17:37.610 "listen_address": { 00:17:37.610 "trtype": "TCP", 00:17:37.610 "adrfam": "IPv4", 00:17:37.610 "traddr": "10.0.0.2", 00:17:37.610 "trsvcid": "4420" 00:17:37.610 }, 00:17:37.610 "peer_address": { 00:17:37.610 "trtype": "TCP", 00:17:37.610 "adrfam": "IPv4", 00:17:37.610 "traddr": "10.0.0.1", 00:17:37.610 "trsvcid": "58194" 00:17:37.610 }, 00:17:37.610 "auth": { 00:17:37.610 "state": "completed", 00:17:37.610 "digest": "sha256", 00:17:37.610 "dhgroup": "ffdhe8192" 00:17:37.610 } 00:17:37.610 } 00:17:37.610 ]' 00:17:37.610 23:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:37.610 23:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:37.610 23:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:37.610 23:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:37.610 23:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:37.610 23:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.610 23:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.610 23:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.871 23:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret 
DHHC-1:00:NTM2M2EyMDhmZjY5N2U0MGUwOTEyZWY3Y2Y2ZmUxNzc4ZjU0NGRkM2ZkMDQ5YTA2s/bLvQ==: --dhchap-ctrl-secret DHHC-1:03:NWJmNGQ3MDZjZjgyZTFkOTQzZTVmODk5YWNkZjE5YzM3MWQ2MzllNTVmNzA2MTk5MWZjMDE2N2ZiNWE0ZmY4ZfryTDs=: 00:17:38.443 23:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.443 23:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:38.443 23:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.443 23:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.443 23:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.443 23:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.443 23:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:38.443 23:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:38.704 23:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:17:38.704 23:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:38.704 23:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:38.704 23:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe8192 00:17:38.704 23:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:38.704 23:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.704 23:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.704 23:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.704 23:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.704 23:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.704 23:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.704 23:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.275 00:17:39.275 23:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.275 23:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.275 23:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.275 23:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.275 23:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.275 23:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.275 23:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.536 23:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.536 23:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.536 { 00:17:39.536 "cntlid": 43, 00:17:39.536 "qid": 0, 00:17:39.536 "state": "enabled", 00:17:39.536 "thread": "nvmf_tgt_poll_group_000", 00:17:39.536 "listen_address": { 00:17:39.536 "trtype": "TCP", 00:17:39.536 "adrfam": "IPv4", 00:17:39.536 "traddr": "10.0.0.2", 00:17:39.536 "trsvcid": "4420" 00:17:39.536 }, 00:17:39.536 "peer_address": { 00:17:39.536 "trtype": "TCP", 00:17:39.536 "adrfam": "IPv4", 00:17:39.536 "traddr": "10.0.0.1", 00:17:39.536 "trsvcid": "58226" 00:17:39.536 }, 00:17:39.536 "auth": { 00:17:39.536 "state": "completed", 00:17:39.536 "digest": "sha256", 00:17:39.536 "dhgroup": "ffdhe8192" 00:17:39.536 } 00:17:39.536 } 00:17:39.536 ]' 00:17:39.536 23:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.536 23:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:39.536 23:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:39.536 23:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:39.536 23:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:39.536 23:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.536 23:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.536 23:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.797 23:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjQ1ZDdjODMwMTExOGRkNmM3MTRkNjE3NmE4NjRjMTRSZFkv: --dhchap-ctrl-secret DHHC-1:02:MDFiODEwZmUzNDNmMzA0NDhkNWI2ZDA0YWU5MzYyM2M1MzczNjU5NDAxMThkZDk3xKaCgg==: 00:17:40.368 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.368 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:40.368 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.368 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.368 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.368 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.368 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:40.368 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:40.628 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:17:40.629 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.629 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:40.629 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:40.629 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:40.629 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.629 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.629 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.629 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.629 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.629 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.629 23:05:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.199 00:17:41.199 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.199 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.199 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.199 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.199 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.199 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.199 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.199 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.199 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.199 { 00:17:41.199 "cntlid": 45, 00:17:41.199 "qid": 0, 00:17:41.200 "state": "enabled", 00:17:41.200 "thread": "nvmf_tgt_poll_group_000", 00:17:41.200 "listen_address": { 00:17:41.200 "trtype": "TCP", 00:17:41.200 "adrfam": "IPv4", 00:17:41.200 "traddr": "10.0.0.2", 00:17:41.200 "trsvcid": "4420" 00:17:41.200 }, 00:17:41.200 "peer_address": { 00:17:41.200 "trtype": "TCP", 00:17:41.200 "adrfam": "IPv4", 00:17:41.200 "traddr": "10.0.0.1", 
00:17:41.200 "trsvcid": "58252" 00:17:41.200 }, 00:17:41.200 "auth": { 00:17:41.200 "state": "completed", 00:17:41.200 "digest": "sha256", 00:17:41.200 "dhgroup": "ffdhe8192" 00:17:41.200 } 00:17:41.200 } 00:17:41.200 ]' 00:17:41.200 23:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.460 23:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:41.460 23:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.460 23:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:41.460 23:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.460 23:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.460 23:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.460 23:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.720 23:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjBmZGMyMGU1NTI4M2EzMjIxNTZiYjAxOWVmZDhhMDQzYWQ0ZjRlNjVmOWFlOTkyddIWdA==: --dhchap-ctrl-secret DHHC-1:01:YjM2OGQyMGYxNzFlNzA2NWQwNDYxZDNjZDI1MGEyMTFR/6t2: 00:17:42.292 23:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.292 23:05:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:42.292 23:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.292 23:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.292 23:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.292 23:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.292 23:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:42.292 23:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:42.552 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:17:42.552 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.552 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:42.552 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:42.552 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:42.552 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.552 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:17:42.552 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.552 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.552 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.552 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:42.552 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:43.124 00:17:43.124 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:43.124 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.124 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.124 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.124 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.124 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.124 23:06:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.124 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.124 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:43.124 { 00:17:43.124 "cntlid": 47, 00:17:43.124 "qid": 0, 00:17:43.124 "state": "enabled", 00:17:43.124 "thread": "nvmf_tgt_poll_group_000", 00:17:43.124 "listen_address": { 00:17:43.124 "trtype": "TCP", 00:17:43.124 "adrfam": "IPv4", 00:17:43.124 "traddr": "10.0.0.2", 00:17:43.124 "trsvcid": "4420" 00:17:43.124 }, 00:17:43.124 "peer_address": { 00:17:43.124 "trtype": "TCP", 00:17:43.124 "adrfam": "IPv4", 00:17:43.124 "traddr": "10.0.0.1", 00:17:43.124 "trsvcid": "58274" 00:17:43.124 }, 00:17:43.124 "auth": { 00:17:43.124 "state": "completed", 00:17:43.124 "digest": "sha256", 00:17:43.124 "dhgroup": "ffdhe8192" 00:17:43.124 } 00:17:43.124 } 00:17:43.124 ]' 00:17:43.124 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.124 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:43.124 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.385 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:43.386 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.386 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.386 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.386 23:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.386 23:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:OTlkNzRjZTk4YjNiMWIyODljMDZjZTc3OWZiNzA1NzZhNThiYjNlM2JjMjFiY2I0YjY2ODlmZTJiMzI0ZmZjOR/2D94=: 00:17:44.353 23:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.353 23:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:44.353 23:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.353 23:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.353 23:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.353 23:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:44.353 23:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:44.353 23:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.353 23:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:44.353 23:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:44.353 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:17:44.353 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.353 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:44.353 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:44.353 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:44.353 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.353 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.353 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.353 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.353 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.353 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.353 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.614 00:17:44.614 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.614 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.614 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.875 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.875 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.875 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.875 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.875 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.875 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.875 { 00:17:44.875 "cntlid": 49, 00:17:44.875 "qid": 0, 00:17:44.875 "state": "enabled", 00:17:44.875 "thread": "nvmf_tgt_poll_group_000", 00:17:44.875 "listen_address": { 00:17:44.875 "trtype": "TCP", 00:17:44.875 "adrfam": "IPv4", 00:17:44.875 "traddr": "10.0.0.2", 00:17:44.875 "trsvcid": "4420" 00:17:44.875 }, 00:17:44.875 "peer_address": { 00:17:44.875 "trtype": "TCP", 00:17:44.875 "adrfam": "IPv4", 00:17:44.875 "traddr": "10.0.0.1", 00:17:44.875 "trsvcid": "58292" 00:17:44.875 }, 00:17:44.875 "auth": { 00:17:44.875 "state": "completed", 00:17:44.875 "digest": "sha384", 00:17:44.875 "dhgroup": "null" 00:17:44.875 } 00:17:44.875 } 00:17:44.875 ]' 00:17:44.875 
23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.875 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:44.875 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.875 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:44.875 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.875 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.875 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.875 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.134 23:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:NTM2M2EyMDhmZjY5N2U0MGUwOTEyZWY3Y2Y2ZmUxNzc4ZjU0NGRkM2ZkMDQ5YTA2s/bLvQ==: --dhchap-ctrl-secret DHHC-1:03:NWJmNGQ3MDZjZjgyZTFkOTQzZTVmODk5YWNkZjE5YzM3MWQ2MzllNTVmNzA2MTk5MWZjMDE2N2ZiNWE0ZmY4ZfryTDs=: 00:17:45.703 23:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.703 23:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:45.703 
23:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.703 23:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.703 23:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.703 23:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.703 23:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:45.703 23:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:45.962 23:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:17:45.962 23:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.962 23:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:45.963 23:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:45.963 23:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:45.963 23:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.963 23:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.963 23:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.963 23:06:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.963 23:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.963 23:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.963 23:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.223 00:17:46.223 23:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.223 23:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.223 23:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.223 23:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.223 23:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.223 23:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.223 23:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.223 23:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:46.223 23:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.223 { 00:17:46.223 "cntlid": 51, 00:17:46.223 "qid": 0, 00:17:46.223 "state": "enabled", 00:17:46.223 "thread": "nvmf_tgt_poll_group_000", 00:17:46.223 "listen_address": { 00:17:46.223 "trtype": "TCP", 00:17:46.223 "adrfam": "IPv4", 00:17:46.223 "traddr": "10.0.0.2", 00:17:46.223 "trsvcid": "4420" 00:17:46.223 }, 00:17:46.223 "peer_address": { 00:17:46.223 "trtype": "TCP", 00:17:46.223 "adrfam": "IPv4", 00:17:46.223 "traddr": "10.0.0.1", 00:17:46.223 "trsvcid": "46894" 00:17:46.223 }, 00:17:46.223 "auth": { 00:17:46.223 "state": "completed", 00:17:46.223 "digest": "sha384", 00:17:46.223 "dhgroup": "null" 00:17:46.223 } 00:17:46.223 } 00:17:46.223 ]' 00:17:46.223 23:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.484 23:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:46.484 23:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.484 23:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:46.484 23:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.484 23:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.484 23:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.484 23:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.744 23:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjQ1ZDdjODMwMTExOGRkNmM3MTRkNjE3NmE4NjRjMTRSZFkv: --dhchap-ctrl-secret DHHC-1:02:MDFiODEwZmUzNDNmMzA0NDhkNWI2ZDA0YWU5MzYyM2M1MzczNjU5NDAxMThkZDk3xKaCgg==: 00:17:47.315 23:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.315 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:47.315 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.315 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.315 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.315 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.315 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:47.315 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:47.575 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:17:47.575 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.575 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:47.575 23:06:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:47.575 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:47.575 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.576 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.576 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.576 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.576 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.576 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.576 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.836 00:17:47.836 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.836 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.836 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.836 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.836 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.836 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.836 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.836 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.836 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:47.836 { 00:17:47.836 "cntlid": 53, 00:17:47.836 "qid": 0, 00:17:47.836 "state": "enabled", 00:17:47.836 "thread": "nvmf_tgt_poll_group_000", 00:17:47.836 "listen_address": { 00:17:47.836 "trtype": "TCP", 00:17:47.836 "adrfam": "IPv4", 00:17:47.836 "traddr": "10.0.0.2", 00:17:47.836 "trsvcid": "4420" 00:17:47.836 }, 00:17:47.836 "peer_address": { 00:17:47.836 "trtype": "TCP", 00:17:47.836 "adrfam": "IPv4", 00:17:47.836 "traddr": "10.0.0.1", 00:17:47.836 "trsvcid": "46914" 00:17:47.836 }, 00:17:47.836 "auth": { 00:17:47.836 "state": "completed", 00:17:47.836 "digest": "sha384", 00:17:47.836 "dhgroup": "null" 00:17:47.836 } 00:17:47.836 } 00:17:47.836 ]' 00:17:47.836 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.096 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:48.096 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.096 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:48.096 23:06:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.096 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.096 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.096 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.357 23:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjBmZGMyMGU1NTI4M2EzMjIxNTZiYjAxOWVmZDhhMDQzYWQ0ZjRlNjVmOWFlOTkyddIWdA==: --dhchap-ctrl-secret DHHC-1:01:YjM2OGQyMGYxNzFlNzA2NWQwNDYxZDNjZDI1MGEyMTFR/6t2: 00:17:48.928 23:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.928 23:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:48.928 23:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.928 23:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.928 23:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.928 23:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.928 23:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:48.928 23:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:49.189 23:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:17:49.189 23:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.189 23:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:49.189 23:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:49.189 23:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:49.189 23:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.189 23:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:17:49.189 23:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.189 23:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.189 23:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.189 23:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:49.189 23:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:49.449 00:17:49.449 23:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.449 23:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.449 23:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.449 23:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.449 23:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.449 23:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.449 23:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.449 23:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.449 23:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.449 { 00:17:49.449 "cntlid": 55, 00:17:49.449 "qid": 0, 00:17:49.449 "state": "enabled", 00:17:49.449 "thread": "nvmf_tgt_poll_group_000", 00:17:49.449 "listen_address": { 00:17:49.449 "trtype": "TCP", 00:17:49.449 "adrfam": "IPv4", 00:17:49.449 "traddr": "10.0.0.2", 00:17:49.449 "trsvcid": "4420" 00:17:49.449 }, 00:17:49.449 "peer_address": { 00:17:49.449 "trtype": "TCP", 00:17:49.449 "adrfam": "IPv4", 00:17:49.449 "traddr": "10.0.0.1", 00:17:49.449 "trsvcid": "46950" 00:17:49.449 }, 00:17:49.449 "auth": { 
00:17:49.449 "state": "completed", 00:17:49.449 "digest": "sha384", 00:17:49.449 "dhgroup": "null" 00:17:49.449 } 00:17:49.449 } 00:17:49.449 ]' 00:17:49.449 23:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.710 23:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:49.710 23:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.710 23:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:49.710 23:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:49.710 23:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.710 23:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.710 23:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.970 23:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:OTlkNzRjZTk4YjNiMWIyODljMDZjZTc3OWZiNzA1NzZhNThiYjNlM2JjMjFiY2I0YjY2ODlmZTJiMzI0ZmZjOR/2D94=: 00:17:50.541 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.541 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:50.541 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.541 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.541 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.541 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.541 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.541 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:50.541 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:50.802 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:17:50.802 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.802 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:50.802 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:50.802 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:50.802 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.802 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.802 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.802 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.802 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.802 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.802 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.802 00:17:51.063 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:51.063 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.063 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.063 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.063 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.063 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:51.063 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.063 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.063 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.063 { 00:17:51.063 "cntlid": 57, 00:17:51.063 "qid": 0, 00:17:51.063 "state": "enabled", 00:17:51.063 "thread": "nvmf_tgt_poll_group_000", 00:17:51.063 "listen_address": { 00:17:51.063 "trtype": "TCP", 00:17:51.063 "adrfam": "IPv4", 00:17:51.063 "traddr": "10.0.0.2", 00:17:51.063 "trsvcid": "4420" 00:17:51.063 }, 00:17:51.063 "peer_address": { 00:17:51.063 "trtype": "TCP", 00:17:51.063 "adrfam": "IPv4", 00:17:51.063 "traddr": "10.0.0.1", 00:17:51.063 "trsvcid": "46972" 00:17:51.063 }, 00:17:51.063 "auth": { 00:17:51.063 "state": "completed", 00:17:51.063 "digest": "sha384", 00:17:51.063 "dhgroup": "ffdhe2048" 00:17:51.063 } 00:17:51.063 } 00:17:51.063 ]' 00:17:51.063 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.063 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:51.063 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.324 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:51.324 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.324 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.324 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.324 23:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.324 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:NTM2M2EyMDhmZjY5N2U0MGUwOTEyZWY3Y2Y2ZmUxNzc4ZjU0NGRkM2ZkMDQ5YTA2s/bLvQ==: --dhchap-ctrl-secret DHHC-1:03:NWJmNGQ3MDZjZjgyZTFkOTQzZTVmODk5YWNkZjE5YzM3MWQ2MzllNTVmNzA2MTk5MWZjMDE2N2ZiNWE0ZmY4ZfryTDs=: 00:17:52.266 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.266 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:52.266 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.266 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.266 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.266 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.266 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:52.267 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:52.267 23:06:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:17:52.267 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.267 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:52.267 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:52.267 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:52.267 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.267 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.267 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.267 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.267 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.267 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.267 23:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:52.528 00:17:52.528 23:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.528 23:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.528 23:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.789 23:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.789 23:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.789 23:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.789 23:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.789 23:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.789 23:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.789 { 00:17:52.789 "cntlid": 59, 00:17:52.789 "qid": 0, 00:17:52.789 "state": "enabled", 00:17:52.789 "thread": "nvmf_tgt_poll_group_000", 00:17:52.789 "listen_address": { 00:17:52.789 "trtype": "TCP", 00:17:52.789 "adrfam": "IPv4", 00:17:52.789 "traddr": "10.0.0.2", 00:17:52.789 "trsvcid": "4420" 00:17:52.789 }, 00:17:52.789 "peer_address": { 00:17:52.789 "trtype": "TCP", 00:17:52.789 "adrfam": "IPv4", 00:17:52.789 "traddr": "10.0.0.1", 00:17:52.789 "trsvcid": "47004" 00:17:52.789 }, 00:17:52.789 "auth": { 00:17:52.789 "state": "completed", 00:17:52.789 "digest": "sha384", 00:17:52.789 "dhgroup": "ffdhe2048" 00:17:52.789 } 00:17:52.789 } 00:17:52.789 ]' 00:17:52.789 23:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.789 
23:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:52.789 23:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.789 23:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:52.789 23:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.789 23:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.789 23:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.789 23:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.050 23:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjQ1ZDdjODMwMTExOGRkNmM3MTRkNjE3NmE4NjRjMTRSZFkv: --dhchap-ctrl-secret DHHC-1:02:MDFiODEwZmUzNDNmMzA0NDhkNWI2ZDA0YWU5MzYyM2M1MzczNjU5NDAxMThkZDk3xKaCgg==: 00:17:53.621 23:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.621 23:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:53.621 23:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.621 23:06:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.621 23:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.621 23:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.621 23:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:53.621 23:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:53.881 23:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:17:53.881 23:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:53.881 23:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:53.881 23:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:53.881 23:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:53.881 23:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.881 23:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.881 23:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.881 23:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.882 23:06:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.882 23:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.882 23:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.142 00:17:54.142 23:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.142 23:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.142 23:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.403 23:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.403 23:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.403 23:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.403 23:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.403 23:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.403 23:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.403 { 
00:17:54.403 "cntlid": 61, 00:17:54.403 "qid": 0, 00:17:54.403 "state": "enabled", 00:17:54.403 "thread": "nvmf_tgt_poll_group_000", 00:17:54.403 "listen_address": { 00:17:54.403 "trtype": "TCP", 00:17:54.403 "adrfam": "IPv4", 00:17:54.403 "traddr": "10.0.0.2", 00:17:54.403 "trsvcid": "4420" 00:17:54.403 }, 00:17:54.403 "peer_address": { 00:17:54.403 "trtype": "TCP", 00:17:54.403 "adrfam": "IPv4", 00:17:54.403 "traddr": "10.0.0.1", 00:17:54.403 "trsvcid": "47014" 00:17:54.403 }, 00:17:54.403 "auth": { 00:17:54.403 "state": "completed", 00:17:54.403 "digest": "sha384", 00:17:54.403 "dhgroup": "ffdhe2048" 00:17:54.403 } 00:17:54.403 } 00:17:54.403 ]' 00:17:54.403 23:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.403 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:54.403 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.403 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:54.403 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.403 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.403 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.403 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.664 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 
801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjBmZGMyMGU1NTI4M2EzMjIxNTZiYjAxOWVmZDhhMDQzYWQ0ZjRlNjVmOWFlOTkyddIWdA==: --dhchap-ctrl-secret DHHC-1:01:YjM2OGQyMGYxNzFlNzA2NWQwNDYxZDNjZDI1MGEyMTFR/6t2: 00:17:55.235 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.235 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:55.235 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.235 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.235 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.235 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.235 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:55.235 23:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:55.496 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:17:55.496 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.496 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:55.496 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe2048 00:17:55.496 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:55.496 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.496 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:17:55.496 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.496 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.496 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.496 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:55.496 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:55.756 00:17:55.756 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.756 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.756 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.017 23:06:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.017 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.017 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.017 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.017 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.017 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.017 { 00:17:56.017 "cntlid": 63, 00:17:56.017 "qid": 0, 00:17:56.017 "state": "enabled", 00:17:56.017 "thread": "nvmf_tgt_poll_group_000", 00:17:56.017 "listen_address": { 00:17:56.017 "trtype": "TCP", 00:17:56.017 "adrfam": "IPv4", 00:17:56.017 "traddr": "10.0.0.2", 00:17:56.017 "trsvcid": "4420" 00:17:56.017 }, 00:17:56.017 "peer_address": { 00:17:56.017 "trtype": "TCP", 00:17:56.017 "adrfam": "IPv4", 00:17:56.017 "traddr": "10.0.0.1", 00:17:56.017 "trsvcid": "47042" 00:17:56.017 }, 00:17:56.017 "auth": { 00:17:56.017 "state": "completed", 00:17:56.017 "digest": "sha384", 00:17:56.017 "dhgroup": "ffdhe2048" 00:17:56.017 } 00:17:56.017 } 00:17:56.017 ]' 00:17:56.017 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.017 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:56.017 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.017 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:56.017 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.017 23:06:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.017 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.017 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.277 23:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:OTlkNzRjZTk4YjNiMWIyODljMDZjZTc3OWZiNzA1NzZhNThiYjNlM2JjMjFiY2I0YjY2ODlmZTJiMzI0ZmZjOR/2D94=: 00:17:56.848 23:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.848 23:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:56.848 23:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.848 23:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.848 23:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.848 23:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:56.848 23:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.848 23:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:56.848 23:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:57.108 23:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:17:57.108 23:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.108 23:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:57.108 23:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:57.108 23:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:57.108 23:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.108 23:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.108 23:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.108 23:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.108 23:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.108 23:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.108 23:06:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.369 00:17:57.369 23:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.369 23:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:57.369 23:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.369 23:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.369 23:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.369 23:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.369 23:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.630 23:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.630 23:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.630 { 00:17:57.630 "cntlid": 65, 00:17:57.630 "qid": 0, 00:17:57.630 "state": "enabled", 00:17:57.630 "thread": "nvmf_tgt_poll_group_000", 00:17:57.630 "listen_address": { 00:17:57.630 "trtype": "TCP", 00:17:57.630 "adrfam": "IPv4", 00:17:57.630 "traddr": "10.0.0.2", 00:17:57.630 "trsvcid": "4420" 00:17:57.630 }, 00:17:57.630 "peer_address": { 00:17:57.630 "trtype": "TCP", 00:17:57.630 "adrfam": "IPv4", 00:17:57.630 "traddr": "10.0.0.1", 
00:17:57.630 "trsvcid": "45344" 00:17:57.630 }, 00:17:57.630 "auth": { 00:17:57.630 "state": "completed", 00:17:57.630 "digest": "sha384", 00:17:57.630 "dhgroup": "ffdhe3072" 00:17:57.630 } 00:17:57.630 } 00:17:57.630 ]' 00:17:57.630 23:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.630 23:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:57.630 23:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.630 23:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:57.630 23:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.630 23:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.630 23:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.630 23:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.891 23:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:NTM2M2EyMDhmZjY5N2U0MGUwOTEyZWY3Y2Y2ZmUxNzc4ZjU0NGRkM2ZkMDQ5YTA2s/bLvQ==: --dhchap-ctrl-secret DHHC-1:03:NWJmNGQ3MDZjZjgyZTFkOTQzZTVmODk5YWNkZjE5YzM3MWQ2MzllNTVmNzA2MTk5MWZjMDE2N2ZiNWE0ZmY4ZfryTDs=: 00:17:58.461 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:17:58.461 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:58.461 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.461 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.461 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.461 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.461 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:58.461 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:58.722 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:17:58.722 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.722 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:58.722 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:58.722 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:58.722 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.722 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.722 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.722 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.722 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.722 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.722 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.983 00:17:58.983 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.983 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.983 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.983 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.983 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.983 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:58.983 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.983 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.983 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.983 { 00:17:58.983 "cntlid": 67, 00:17:58.983 "qid": 0, 00:17:58.983 "state": "enabled", 00:17:58.983 "thread": "nvmf_tgt_poll_group_000", 00:17:58.983 "listen_address": { 00:17:58.983 "trtype": "TCP", 00:17:58.983 "adrfam": "IPv4", 00:17:58.983 "traddr": "10.0.0.2", 00:17:58.983 "trsvcid": "4420" 00:17:58.983 }, 00:17:58.983 "peer_address": { 00:17:58.983 "trtype": "TCP", 00:17:58.983 "adrfam": "IPv4", 00:17:58.983 "traddr": "10.0.0.1", 00:17:58.983 "trsvcid": "45382" 00:17:58.983 }, 00:17:58.983 "auth": { 00:17:58.983 "state": "completed", 00:17:58.983 "digest": "sha384", 00:17:58.983 "dhgroup": "ffdhe3072" 00:17:58.983 } 00:17:58.983 } 00:17:58.983 ]' 00:17:58.983 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.243 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:59.243 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.243 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:59.243 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.243 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.243 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.243 23:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.503 23:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjQ1ZDdjODMwMTExOGRkNmM3MTRkNjE3NmE4NjRjMTRSZFkv: --dhchap-ctrl-secret DHHC-1:02:MDFiODEwZmUzNDNmMzA0NDhkNWI2ZDA0YWU5MzYyM2M1MzczNjU5NDAxMThkZDk3xKaCgg==: 00:18:00.074 23:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.074 23:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:00.074 23:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.074 23:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.074 23:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.074 23:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.074 23:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:00.074 23:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:00.335 23:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha384 ffdhe3072 2 00:18:00.335 23:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.335 23:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:00.335 23:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:00.335 23:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:00.335 23:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.335 23:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.335 23:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.335 23:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.335 23:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.335 23:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.335 23:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.637 00:18:00.637 23:06:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.637 23:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.637 23:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.637 23:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.637 23:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.637 23:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.637 23:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.637 23:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.637 23:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.637 { 00:18:00.637 "cntlid": 69, 00:18:00.637 "qid": 0, 00:18:00.637 "state": "enabled", 00:18:00.637 "thread": "nvmf_tgt_poll_group_000", 00:18:00.637 "listen_address": { 00:18:00.637 "trtype": "TCP", 00:18:00.637 "adrfam": "IPv4", 00:18:00.637 "traddr": "10.0.0.2", 00:18:00.637 "trsvcid": "4420" 00:18:00.637 }, 00:18:00.637 "peer_address": { 00:18:00.637 "trtype": "TCP", 00:18:00.637 "adrfam": "IPv4", 00:18:00.637 "traddr": "10.0.0.1", 00:18:00.637 "trsvcid": "45416" 00:18:00.637 }, 00:18:00.637 "auth": { 00:18:00.637 "state": "completed", 00:18:00.637 "digest": "sha384", 00:18:00.637 "dhgroup": "ffdhe3072" 00:18:00.637 } 00:18:00.637 } 00:18:00.637 ]' 00:18:00.637 23:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.905 23:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:00.905 23:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.905 23:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:00.905 23:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.905 23:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.905 23:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.905 23:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.905 23:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjBmZGMyMGU1NTI4M2EzMjIxNTZiYjAxOWVmZDhhMDQzYWQ0ZjRlNjVmOWFlOTkyddIWdA==: --dhchap-ctrl-secret DHHC-1:01:YjM2OGQyMGYxNzFlNzA2NWQwNDYxZDNjZDI1MGEyMTFR/6t2: 00:18:01.846 23:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.847 23:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:01.847 23:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.847 23:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:01.847 23:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.847 23:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.847 23:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:01.847 23:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:01.847 23:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:01.847 23:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.847 23:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:01.847 23:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:01.847 23:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:01.847 23:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.847 23:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:18:01.847 23:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.847 23:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.847 23:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:18:01.847 23:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:01.847 23:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:02.107 00:18:02.107 23:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.107 23:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.107 23:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.367 23:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.367 23:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.367 23:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.367 23:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.367 23:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.367 23:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.367 { 00:18:02.367 "cntlid": 71, 00:18:02.367 "qid": 0, 00:18:02.367 "state": "enabled", 00:18:02.367 "thread": "nvmf_tgt_poll_group_000", 
00:18:02.367 "listen_address": { 00:18:02.367 "trtype": "TCP", 00:18:02.367 "adrfam": "IPv4", 00:18:02.367 "traddr": "10.0.0.2", 00:18:02.367 "trsvcid": "4420" 00:18:02.367 }, 00:18:02.367 "peer_address": { 00:18:02.367 "trtype": "TCP", 00:18:02.367 "adrfam": "IPv4", 00:18:02.367 "traddr": "10.0.0.1", 00:18:02.367 "trsvcid": "45442" 00:18:02.367 }, 00:18:02.367 "auth": { 00:18:02.367 "state": "completed", 00:18:02.367 "digest": "sha384", 00:18:02.367 "dhgroup": "ffdhe3072" 00:18:02.367 } 00:18:02.367 } 00:18:02.367 ]' 00:18:02.367 23:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.367 23:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:02.367 23:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.367 23:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:02.367 23:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.367 23:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.367 23:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.367 23:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.627 23:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:OTlkNzRjZTk4YjNiMWIyODljMDZjZTc3OWZiNzA1NzZhNThiYjNlM2JjMjFiY2I0YjY2ODlmZTJiMzI0ZmZjOR/2D94=: 
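Each iteration of this loop follows the same pattern: `bdev_nvme_set_options` restricts DH-HMAC-CHAP negotiation on the host to one digest/dhgroup pair, `nvmf_subsystem_add_host` registers the key under test on the target, `bdev_nvme_attach_controller` connects, and the qpair JSON from `nvmf_subsystem_get_qpairs` is checked with jq (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state`). A minimal Python sketch of that verification step, using a qpair record copied from this run (the jq filters map to plain dict lookups):

```python
import json

# Qpair JSON as returned by `rpc.py nvmf_subsystem_get_qpairs` (copied from this run).
qpairs_json = '''
[
  {
    "cntlid": 67,
    "qid": 0,
    "state": "enabled",
    "thread": "nvmf_tgt_poll_group_000",
    "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                        "traddr": "10.0.0.2", "trsvcid": "4420" },
    "peer_address":   { "trtype": "TCP", "adrfam": "IPv4",
                        "traddr": "10.0.0.1", "trsvcid": "45382" },
    "auth": { "state": "completed", "digest": "sha384", "dhgroup": "ffdhe3072" }
  }
]
'''

def check_auth(qpairs: str, digest: str, dhgroup: str) -> bool:
    """Mirror of the jq checks at target/auth.sh lines 46-48:
    digest and dhgroup must match what was negotiated, and the
    authentication state must have reached "completed"."""
    auth = json.loads(qpairs)[0]["auth"]
    return (auth["digest"] == digest
            and auth["dhgroup"] == dhgroup
            and auth["state"] == "completed")

print(check_auth(qpairs_json, "sha384", "ffdhe3072"))  # True for this run
```

The shell test fails fast via `[[ … ]]` comparisons on each jq result; the sketch folds the three checks into one predicate.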
00:18:03.568 23:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.568 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:03.568 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.568 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.568 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.568 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.568 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.568 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:03.568 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:03.568 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:18:03.568 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.568 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:03.568 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:03.568 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:18:03.568 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.568 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.568 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.568 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.568 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.568 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.568 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.828 00:18:03.828 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.828 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.828 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.089 23:06:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.089 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.089 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.089 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.089 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.089 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.089 { 00:18:04.089 "cntlid": 73, 00:18:04.089 "qid": 0, 00:18:04.089 "state": "enabled", 00:18:04.089 "thread": "nvmf_tgt_poll_group_000", 00:18:04.089 "listen_address": { 00:18:04.089 "trtype": "TCP", 00:18:04.089 "adrfam": "IPv4", 00:18:04.089 "traddr": "10.0.0.2", 00:18:04.089 "trsvcid": "4420" 00:18:04.089 }, 00:18:04.089 "peer_address": { 00:18:04.089 "trtype": "TCP", 00:18:04.089 "adrfam": "IPv4", 00:18:04.089 "traddr": "10.0.0.1", 00:18:04.089 "trsvcid": "45468" 00:18:04.089 }, 00:18:04.089 "auth": { 00:18:04.089 "state": "completed", 00:18:04.089 "digest": "sha384", 00:18:04.089 "dhgroup": "ffdhe4096" 00:18:04.089 } 00:18:04.089 } 00:18:04.089 ]' 00:18:04.089 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.089 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:04.089 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.089 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:04.089 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.089 23:06:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.089 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.089 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.348 23:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:NTM2M2EyMDhmZjY5N2U0MGUwOTEyZWY3Y2Y2ZmUxNzc4ZjU0NGRkM2ZkMDQ5YTA2s/bLvQ==: --dhchap-ctrl-secret DHHC-1:03:NWJmNGQ3MDZjZjgyZTFkOTQzZTVmODk5YWNkZjE5YzM3MWQ2MzllNTVmNzA2MTk5MWZjMDE2N2ZiNWE0ZmY4ZfryTDs=: 00:18:04.918 23:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.918 23:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:04.918 23:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.918 23:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.918 23:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.918 23:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.918 23:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe4096 00:18:04.918 23:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:05.178 23:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:18:05.178 23:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.178 23:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:05.178 23:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:05.178 23:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:05.178 23:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.178 23:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.178 23:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.178 23:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.178 23:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.178 23:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.178 23:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.439 00:18:05.439 23:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.439 23:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.439 23:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.439 23:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.439 23:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.439 23:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.439 23:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.699 23:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.699 23:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.699 { 00:18:05.699 "cntlid": 75, 00:18:05.700 "qid": 0, 00:18:05.700 "state": "enabled", 00:18:05.700 "thread": "nvmf_tgt_poll_group_000", 00:18:05.700 "listen_address": { 00:18:05.700 "trtype": "TCP", 00:18:05.700 "adrfam": "IPv4", 00:18:05.700 "traddr": "10.0.0.2", 00:18:05.700 "trsvcid": "4420" 00:18:05.700 }, 00:18:05.700 "peer_address": { 00:18:05.700 "trtype": "TCP", 00:18:05.700 "adrfam": "IPv4", 00:18:05.700 "traddr": "10.0.0.1", 00:18:05.700 "trsvcid": "45500" 00:18:05.700 
}, 00:18:05.700 "auth": { 00:18:05.700 "state": "completed", 00:18:05.700 "digest": "sha384", 00:18:05.700 "dhgroup": "ffdhe4096" 00:18:05.700 } 00:18:05.700 } 00:18:05.700 ]' 00:18:05.700 23:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.700 23:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:05.700 23:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.700 23:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:05.700 23:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.700 23:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.700 23:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.700 23:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.961 23:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjQ1ZDdjODMwMTExOGRkNmM3MTRkNjE3NmE4NjRjMTRSZFkv: --dhchap-ctrl-secret DHHC-1:02:MDFiODEwZmUzNDNmMzA0NDhkNWI2ZDA0YWU5MzYyM2M1MzczNjU5NDAxMThkZDk3xKaCgg==: 00:18:06.531 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.531 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:06.531 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.531 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.531 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.531 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.531 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:06.531 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:06.791 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:18:06.791 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.791 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:06.791 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:06.791 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:06.791 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.792 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:18:06.792 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.792 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.792 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.792 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.792 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.051 00:18:07.051 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.051 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.052 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.312 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.312 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.312 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.312 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:18:07.312 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.312 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.312 { 00:18:07.312 "cntlid": 77, 00:18:07.312 "qid": 0, 00:18:07.312 "state": "enabled", 00:18:07.312 "thread": "nvmf_tgt_poll_group_000", 00:18:07.312 "listen_address": { 00:18:07.312 "trtype": "TCP", 00:18:07.312 "adrfam": "IPv4", 00:18:07.312 "traddr": "10.0.0.2", 00:18:07.312 "trsvcid": "4420" 00:18:07.312 }, 00:18:07.312 "peer_address": { 00:18:07.312 "trtype": "TCP", 00:18:07.312 "adrfam": "IPv4", 00:18:07.312 "traddr": "10.0.0.1", 00:18:07.312 "trsvcid": "40002" 00:18:07.312 }, 00:18:07.312 "auth": { 00:18:07.312 "state": "completed", 00:18:07.312 "digest": "sha384", 00:18:07.312 "dhgroup": "ffdhe4096" 00:18:07.312 } 00:18:07.312 } 00:18:07.312 ]' 00:18:07.312 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.312 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:07.312 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.312 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:07.312 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.312 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.312 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.312 23:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:07.573 23:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjBmZGMyMGU1NTI4M2EzMjIxNTZiYjAxOWVmZDhhMDQzYWQ0ZjRlNjVmOWFlOTkyddIWdA==: --dhchap-ctrl-secret DHHC-1:01:YjM2OGQyMGYxNzFlNzA2NWQwNDYxZDNjZDI1MGEyMTFR/6t2: 00:18:08.143 23:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.144 23:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:08.144 23:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.144 23:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.144 23:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.144 23:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.144 23:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:08.144 23:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:08.404 23:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:18:08.404 23:06:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.404 23:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:08.404 23:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:08.404 23:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:08.404 23:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.404 23:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:18:08.404 23:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.404 23:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.404 23:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.404 23:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:08.404 23:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:08.665 00:18:08.665 23:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.665 23:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.665 23:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.926 23:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.926 23:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.926 23:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.926 23:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.926 23:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.926 23:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.926 { 00:18:08.926 "cntlid": 79, 00:18:08.926 "qid": 0, 00:18:08.926 "state": "enabled", 00:18:08.926 "thread": "nvmf_tgt_poll_group_000", 00:18:08.926 "listen_address": { 00:18:08.926 "trtype": "TCP", 00:18:08.926 "adrfam": "IPv4", 00:18:08.926 "traddr": "10.0.0.2", 00:18:08.926 "trsvcid": "4420" 00:18:08.926 }, 00:18:08.926 "peer_address": { 00:18:08.926 "trtype": "TCP", 00:18:08.926 "adrfam": "IPv4", 00:18:08.926 "traddr": "10.0.0.1", 00:18:08.926 "trsvcid": "40018" 00:18:08.926 }, 00:18:08.926 "auth": { 00:18:08.926 "state": "completed", 00:18:08.926 "digest": "sha384", 00:18:08.926 "dhgroup": "ffdhe4096" 00:18:08.926 } 00:18:08.926 } 00:18:08.926 ]' 00:18:08.926 23:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.926 23:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:08.926 23:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:18:08.926 23:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:08.926 23:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.926 23:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.926 23:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.926 23:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.187 23:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:OTlkNzRjZTk4YjNiMWIyODljMDZjZTc3OWZiNzA1NzZhNThiYjNlM2JjMjFiY2I0YjY2ODlmZTJiMzI0ZmZjOR/2D94=: 00:18:09.757 23:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.757 23:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:09.757 23:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.757 23:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.757 23:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.757 23:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:09.757 23:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.757 23:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:09.757 23:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:10.017 23:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:10.017 23:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.017 23:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:10.017 23:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:10.017 23:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:10.017 23:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.017 23:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.017 23:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.017 23:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.018 23:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.018 23:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.018 23:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.278 00:18:10.278 23:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.278 23:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.278 23:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.539 23:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.539 23:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.539 23:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.539 23:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.539 23:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.539 23:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.539 { 00:18:10.539 "cntlid": 81, 00:18:10.539 "qid": 0, 00:18:10.539 "state": "enabled", 00:18:10.539 "thread": "nvmf_tgt_poll_group_000", 00:18:10.539 "listen_address": { 
00:18:10.539 "trtype": "TCP", 00:18:10.539 "adrfam": "IPv4", 00:18:10.539 "traddr": "10.0.0.2", 00:18:10.539 "trsvcid": "4420" 00:18:10.539 }, 00:18:10.539 "peer_address": { 00:18:10.539 "trtype": "TCP", 00:18:10.539 "adrfam": "IPv4", 00:18:10.539 "traddr": "10.0.0.1", 00:18:10.539 "trsvcid": "40052" 00:18:10.539 }, 00:18:10.539 "auth": { 00:18:10.539 "state": "completed", 00:18:10.539 "digest": "sha384", 00:18:10.539 "dhgroup": "ffdhe6144" 00:18:10.539 } 00:18:10.539 } 00:18:10.539 ]' 00:18:10.539 23:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.539 23:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:10.539 23:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.539 23:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:10.539 23:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.799 23:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.799 23:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.799 23:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.799 23:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:NTM2M2EyMDhmZjY5N2U0MGUwOTEyZWY3Y2Y2ZmUxNzc4ZjU0NGRkM2ZkMDQ5YTA2s/bLvQ==: --dhchap-ctrl-secret 
DHHC-1:03:NWJmNGQ3MDZjZjgyZTFkOTQzZTVmODk5YWNkZjE5YzM3MWQ2MzllNTVmNzA2MTk5MWZjMDE2N2ZiNWE0ZmY4ZfryTDs=: 00:18:11.741 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.741 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:11.741 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.741 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.741 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.741 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.741 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:11.741 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:11.741 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:11.741 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.741 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:11.741 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:11.741 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key1 00:18:11.741 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.741 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.741 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.741 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.741 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.741 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.741 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.002 00:18:12.002 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.002 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.002 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.263 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.263 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.263 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.263 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.263 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.263 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.263 { 00:18:12.263 "cntlid": 83, 00:18:12.263 "qid": 0, 00:18:12.263 "state": "enabled", 00:18:12.263 "thread": "nvmf_tgt_poll_group_000", 00:18:12.263 "listen_address": { 00:18:12.263 "trtype": "TCP", 00:18:12.263 "adrfam": "IPv4", 00:18:12.263 "traddr": "10.0.0.2", 00:18:12.263 "trsvcid": "4420" 00:18:12.263 }, 00:18:12.263 "peer_address": { 00:18:12.263 "trtype": "TCP", 00:18:12.263 "adrfam": "IPv4", 00:18:12.263 "traddr": "10.0.0.1", 00:18:12.263 "trsvcid": "40072" 00:18:12.263 }, 00:18:12.263 "auth": { 00:18:12.263 "state": "completed", 00:18:12.263 "digest": "sha384", 00:18:12.263 "dhgroup": "ffdhe6144" 00:18:12.263 } 00:18:12.263 } 00:18:12.263 ]' 00:18:12.263 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.263 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:12.263 23:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.263 23:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:12.263 23:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.523 23:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.523 23:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.523 23:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.523 23:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjQ1ZDdjODMwMTExOGRkNmM3MTRkNjE3NmE4NjRjMTRSZFkv: --dhchap-ctrl-secret DHHC-1:02:MDFiODEwZmUzNDNmMzA0NDhkNWI2ZDA0YWU5MzYyM2M1MzczNjU5NDAxMThkZDk3xKaCgg==: 00:18:13.466 23:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.466 23:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:13.466 23:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.466 23:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.466 23:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.466 23:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.466 23:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:13.466 23:06:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:13.466 23:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:13.466 23:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.466 23:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:13.466 23:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:13.466 23:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:13.466 23:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.466 23:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.466 23:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.466 23:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.466 23:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.466 23:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.466 23:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.727 00:18:13.727 23:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.727 23:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.727 23:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.988 23:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.988 23:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.988 23:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.988 23:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.988 23:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.988 23:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.988 { 00:18:13.988 "cntlid": 85, 00:18:13.988 "qid": 0, 00:18:13.988 "state": "enabled", 00:18:13.988 "thread": "nvmf_tgt_poll_group_000", 00:18:13.988 "listen_address": { 00:18:13.988 "trtype": "TCP", 00:18:13.988 "adrfam": "IPv4", 00:18:13.988 "traddr": "10.0.0.2", 00:18:13.988 "trsvcid": "4420" 00:18:13.988 }, 00:18:13.988 "peer_address": { 00:18:13.988 "trtype": "TCP", 00:18:13.988 "adrfam": "IPv4", 00:18:13.988 "traddr": "10.0.0.1", 00:18:13.988 "trsvcid": "40102" 00:18:13.988 }, 00:18:13.988 "auth": { 
00:18:13.988 "state": "completed", 00:18:13.988 "digest": "sha384", 00:18:13.988 "dhgroup": "ffdhe6144" 00:18:13.988 } 00:18:13.988 } 00:18:13.988 ]' 00:18:13.988 23:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.988 23:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:13.988 23:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.988 23:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:13.988 23:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.988 23:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.988 23:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.988 23:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.248 23:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjBmZGMyMGU1NTI4M2EzMjIxNTZiYjAxOWVmZDhhMDQzYWQ0ZjRlNjVmOWFlOTkyddIWdA==: --dhchap-ctrl-secret DHHC-1:01:YjM2OGQyMGYxNzFlNzA2NWQwNDYxZDNjZDI1MGEyMTFR/6t2: 00:18:15.190 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.190 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:15.190 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.190 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.190 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.190 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.190 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:15.190 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:15.190 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:18:15.190 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.190 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:15.190 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:15.190 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:15.190 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.190 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:18:15.190 23:06:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.190 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.190 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.190 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:15.190 23:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:15.451 00:18:15.451 23:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.451 23:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.451 23:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.712 23:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.712 23:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.712 23:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.712 23:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.712 23:06:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.712 23:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.712 { 00:18:15.712 "cntlid": 87, 00:18:15.712 "qid": 0, 00:18:15.712 "state": "enabled", 00:18:15.712 "thread": "nvmf_tgt_poll_group_000", 00:18:15.712 "listen_address": { 00:18:15.712 "trtype": "TCP", 00:18:15.712 "adrfam": "IPv4", 00:18:15.712 "traddr": "10.0.0.2", 00:18:15.712 "trsvcid": "4420" 00:18:15.712 }, 00:18:15.712 "peer_address": { 00:18:15.712 "trtype": "TCP", 00:18:15.712 "adrfam": "IPv4", 00:18:15.712 "traddr": "10.0.0.1", 00:18:15.712 "trsvcid": "40138" 00:18:15.712 }, 00:18:15.712 "auth": { 00:18:15.712 "state": "completed", 00:18:15.712 "digest": "sha384", 00:18:15.712 "dhgroup": "ffdhe6144" 00:18:15.712 } 00:18:15.712 } 00:18:15.712 ]' 00:18:15.712 23:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.712 23:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:15.712 23:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.712 23:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:15.712 23:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.712 23:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.712 23:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.712 23:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.972 23:06:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:OTlkNzRjZTk4YjNiMWIyODljMDZjZTc3OWZiNzA1NzZhNThiYjNlM2JjMjFiY2I0YjY2ODlmZTJiMzI0ZmZjOR/2D94=: 00:18:16.912 23:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.912 23:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:16.912 23:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.912 23:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.912 23:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.912 23:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:16.912 23:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.912 23:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:16.912 23:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:16.912 23:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:18:16.912 23:06:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.912 23:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:16.912 23:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:16.912 23:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:16.912 23:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.912 23:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.912 23:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.912 23:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.912 23:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.912 23:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.912 23:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.483 00:18:17.483 23:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:18:17.483 23:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.483 23:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.483 23:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.483 23:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.483 23:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.483 23:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.483 23:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.483 23:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.483 { 00:18:17.483 "cntlid": 89, 00:18:17.483 "qid": 0, 00:18:17.483 "state": "enabled", 00:18:17.483 "thread": "nvmf_tgt_poll_group_000", 00:18:17.483 "listen_address": { 00:18:17.483 "trtype": "TCP", 00:18:17.483 "adrfam": "IPv4", 00:18:17.483 "traddr": "10.0.0.2", 00:18:17.483 "trsvcid": "4420" 00:18:17.483 }, 00:18:17.483 "peer_address": { 00:18:17.483 "trtype": "TCP", 00:18:17.483 "adrfam": "IPv4", 00:18:17.483 "traddr": "10.0.0.1", 00:18:17.483 "trsvcid": "47856" 00:18:17.483 }, 00:18:17.483 "auth": { 00:18:17.483 "state": "completed", 00:18:17.483 "digest": "sha384", 00:18:17.483 "dhgroup": "ffdhe8192" 00:18:17.483 } 00:18:17.483 } 00:18:17.483 ]' 00:18:17.483 23:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.743 23:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:17.743 23:06:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.743 23:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:17.743 23:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.743 23:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.743 23:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.743 23:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.004 23:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:NTM2M2EyMDhmZjY5N2U0MGUwOTEyZWY3Y2Y2ZmUxNzc4ZjU0NGRkM2ZkMDQ5YTA2s/bLvQ==: --dhchap-ctrl-secret DHHC-1:03:NWJmNGQ3MDZjZjgyZTFkOTQzZTVmODk5YWNkZjE5YzM3MWQ2MzllNTVmNzA2MTk5MWZjMDE2N2ZiNWE0ZmY4ZfryTDs=: 00:18:18.606 23:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.606 23:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:18.606 23:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.606 23:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.607 
23:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.607 23:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.607 23:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:18.607 23:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:18.867 23:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:18:18.867 23:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.867 23:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:18.867 23:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:18.867 23:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:18.867 23:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.867 23:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.867 23:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.867 23:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.867 23:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.867 23:06:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.867 23:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.437 00:18:19.437 23:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.437 23:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.437 23:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.437 23:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.437 23:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.437 23:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.437 23:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.437 23:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.437 23:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.437 { 00:18:19.437 "cntlid": 91, 00:18:19.437 "qid": 0, 00:18:19.437 "state": "enabled", 00:18:19.437 "thread": 
"nvmf_tgt_poll_group_000", 00:18:19.437 "listen_address": { 00:18:19.437 "trtype": "TCP", 00:18:19.437 "adrfam": "IPv4", 00:18:19.437 "traddr": "10.0.0.2", 00:18:19.437 "trsvcid": "4420" 00:18:19.437 }, 00:18:19.437 "peer_address": { 00:18:19.437 "trtype": "TCP", 00:18:19.437 "adrfam": "IPv4", 00:18:19.437 "traddr": "10.0.0.1", 00:18:19.437 "trsvcid": "47872" 00:18:19.437 }, 00:18:19.437 "auth": { 00:18:19.437 "state": "completed", 00:18:19.437 "digest": "sha384", 00:18:19.437 "dhgroup": "ffdhe8192" 00:18:19.437 } 00:18:19.437 } 00:18:19.437 ]' 00:18:19.437 23:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.437 23:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:19.437 23:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.698 23:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:19.698 23:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.698 23:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.698 23:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.698 23:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.698 23:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjQ1ZDdjODMwMTExOGRkNmM3MTRkNjE3NmE4NjRjMTRSZFkv: 
--dhchap-ctrl-secret DHHC-1:02:MDFiODEwZmUzNDNmMzA0NDhkNWI2ZDA0YWU5MzYyM2M1MzczNjU5NDAxMThkZDk3xKaCgg==: 00:18:20.640 23:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.640 23:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:20.640 23:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.640 23:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.640 23:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.640 23:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.640 23:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:20.640 23:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:20.640 23:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:18:20.640 23:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.640 23:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:20.640 23:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:20.640 23:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key2 00:18:20.640 23:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.640 23:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.640 23:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.640 23:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.640 23:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.640 23:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.640 23:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.211 00:18:21.211 23:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.211 23:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.211 23:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.472 23:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.472 23:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.472 23:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.472 23:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.472 23:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.472 23:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.472 { 00:18:21.472 "cntlid": 93, 00:18:21.472 "qid": 0, 00:18:21.472 "state": "enabled", 00:18:21.472 "thread": "nvmf_tgt_poll_group_000", 00:18:21.472 "listen_address": { 00:18:21.472 "trtype": "TCP", 00:18:21.472 "adrfam": "IPv4", 00:18:21.472 "traddr": "10.0.0.2", 00:18:21.472 "trsvcid": "4420" 00:18:21.472 }, 00:18:21.472 "peer_address": { 00:18:21.472 "trtype": "TCP", 00:18:21.472 "adrfam": "IPv4", 00:18:21.472 "traddr": "10.0.0.1", 00:18:21.472 "trsvcid": "47894" 00:18:21.472 }, 00:18:21.472 "auth": { 00:18:21.472 "state": "completed", 00:18:21.472 "digest": "sha384", 00:18:21.472 "dhgroup": "ffdhe8192" 00:18:21.472 } 00:18:21.472 } 00:18:21.472 ]' 00:18:21.472 23:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.472 23:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:21.472 23:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.472 23:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:21.472 23:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.472 23:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.472 23:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.472 23:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.733 23:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjBmZGMyMGU1NTI4M2EzMjIxNTZiYjAxOWVmZDhhMDQzYWQ0ZjRlNjVmOWFlOTkyddIWdA==: --dhchap-ctrl-secret DHHC-1:01:YjM2OGQyMGYxNzFlNzA2NWQwNDYxZDNjZDI1MGEyMTFR/6t2: 00:18:22.305 23:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.305 23:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:22.305 23:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.305 23:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.305 23:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.305 23:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:22.305 23:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:22.305 23:06:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:22.566 23:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:18:22.566 23:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:22.566 23:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:22.566 23:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:22.566 23:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:22.566 23:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.566 23:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:18:22.566 23:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.566 23:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.566 23:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.566 23:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:22.566 23:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.137 00:18:23.137 23:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.137 23:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.137 23:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.398 23:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.398 23:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.398 23:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.398 23:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.398 23:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.398 23:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.398 { 00:18:23.398 "cntlid": 95, 00:18:23.398 "qid": 0, 00:18:23.398 "state": "enabled", 00:18:23.398 "thread": "nvmf_tgt_poll_group_000", 00:18:23.398 "listen_address": { 00:18:23.398 "trtype": "TCP", 00:18:23.398 "adrfam": "IPv4", 00:18:23.398 "traddr": "10.0.0.2", 00:18:23.398 "trsvcid": "4420" 00:18:23.398 }, 00:18:23.398 "peer_address": { 00:18:23.398 "trtype": "TCP", 00:18:23.398 "adrfam": "IPv4", 00:18:23.398 "traddr": "10.0.0.1", 00:18:23.398 "trsvcid": "47932" 00:18:23.398 }, 00:18:23.398 "auth": { 00:18:23.398 "state": "completed", 00:18:23.398 "digest": "sha384", 00:18:23.398 "dhgroup": 
"ffdhe8192" 00:18:23.398 } 00:18:23.398 } 00:18:23.398 ]' 00:18:23.398 23:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.398 23:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:23.398 23:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.398 23:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:23.398 23:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:23.398 23:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.398 23:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.398 23:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.659 23:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:OTlkNzRjZTk4YjNiMWIyODljMDZjZTc3OWZiNzA1NzZhNThiYjNlM2JjMjFiY2I0YjY2ODlmZTJiMzI0ZmZjOR/2D94=: 00:18:24.231 23:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.231 23:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:24.231 23:06:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.231 23:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.231 23:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.231 23:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:24.231 23:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:24.231 23:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.231 23:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:24.231 23:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:24.492 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:18:24.492 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.492 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:24.492 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:24.492 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:24.492 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.492 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.492 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.492 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.492 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.492 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.492 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.752 00:18:24.752 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.752 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.752 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.014 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.014 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.014 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:25.014 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.014 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.014 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.014 { 00:18:25.014 "cntlid": 97, 00:18:25.014 "qid": 0, 00:18:25.014 "state": "enabled", 00:18:25.014 "thread": "nvmf_tgt_poll_group_000", 00:18:25.014 "listen_address": { 00:18:25.014 "trtype": "TCP", 00:18:25.014 "adrfam": "IPv4", 00:18:25.014 "traddr": "10.0.0.2", 00:18:25.014 "trsvcid": "4420" 00:18:25.014 }, 00:18:25.014 "peer_address": { 00:18:25.014 "trtype": "TCP", 00:18:25.014 "adrfam": "IPv4", 00:18:25.014 "traddr": "10.0.0.1", 00:18:25.014 "trsvcid": "47946" 00:18:25.014 }, 00:18:25.014 "auth": { 00:18:25.014 "state": "completed", 00:18:25.014 "digest": "sha512", 00:18:25.014 "dhgroup": "null" 00:18:25.014 } 00:18:25.014 } 00:18:25.014 ]' 00:18:25.014 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.014 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:25.014 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:25.014 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:25.014 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.014 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.014 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.014 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.274 23:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:NTM2M2EyMDhmZjY5N2U0MGUwOTEyZWY3Y2Y2ZmUxNzc4ZjU0NGRkM2ZkMDQ5YTA2s/bLvQ==: --dhchap-ctrl-secret DHHC-1:03:NWJmNGQ3MDZjZjgyZTFkOTQzZTVmODk5YWNkZjE5YzM3MWQ2MzllNTVmNzA2MTk5MWZjMDE2N2ZiNWE0ZmY4ZfryTDs=: 00:18:25.845 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.845 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:25.845 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.845 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.845 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.845 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.845 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:25.845 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:26.106 23:06:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:18:26.106 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.106 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:26.106 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:26.106 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:26.106 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.106 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.106 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.106 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.106 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.106 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.106 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:18:26.367 00:18:26.367 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.367 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.367 23:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.367 23:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.367 23:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.367 23:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.367 23:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.367 23:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.367 23:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.367 { 00:18:26.367 "cntlid": 99, 00:18:26.367 "qid": 0, 00:18:26.367 "state": "enabled", 00:18:26.367 "thread": "nvmf_tgt_poll_group_000", 00:18:26.367 "listen_address": { 00:18:26.367 "trtype": "TCP", 00:18:26.367 "adrfam": "IPv4", 00:18:26.367 "traddr": "10.0.0.2", 00:18:26.367 "trsvcid": "4420" 00:18:26.367 }, 00:18:26.367 "peer_address": { 00:18:26.367 "trtype": "TCP", 00:18:26.367 "adrfam": "IPv4", 00:18:26.367 "traddr": "10.0.0.1", 00:18:26.367 "trsvcid": "46050" 00:18:26.367 }, 00:18:26.367 "auth": { 00:18:26.367 "state": "completed", 00:18:26.367 "digest": "sha512", 00:18:26.367 "dhgroup": "null" 00:18:26.367 } 00:18:26.367 } 00:18:26.367 ]' 00:18:26.367 23:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.628 
23:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.628 23:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.628 23:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:26.628 23:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.628 23:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.628 23:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.628 23:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.888 23:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjQ1ZDdjODMwMTExOGRkNmM3MTRkNjE3NmE4NjRjMTRSZFkv: --dhchap-ctrl-secret DHHC-1:02:MDFiODEwZmUzNDNmMzA0NDhkNWI2ZDA0YWU5MzYyM2M1MzczNjU5NDAxMThkZDk3xKaCgg==: 00:18:27.459 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.459 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:27.459 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.459 23:06:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.459 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.459 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.459 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:27.459 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:27.721 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:18:27.721 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:27.721 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:27.721 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:27.721 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:27.721 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.721 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.721 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.721 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.721 23:06:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.721 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.721 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.982 00:18:27.982 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.982 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.982 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.982 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.982 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.982 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.982 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.982 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.982 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.982 { 
00:18:27.982 "cntlid": 101, 00:18:27.982 "qid": 0, 00:18:27.982 "state": "enabled", 00:18:27.982 "thread": "nvmf_tgt_poll_group_000", 00:18:27.982 "listen_address": { 00:18:27.982 "trtype": "TCP", 00:18:27.982 "adrfam": "IPv4", 00:18:27.982 "traddr": "10.0.0.2", 00:18:27.982 "trsvcid": "4420" 00:18:27.982 }, 00:18:27.982 "peer_address": { 00:18:27.982 "trtype": "TCP", 00:18:27.982 "adrfam": "IPv4", 00:18:27.982 "traddr": "10.0.0.1", 00:18:27.982 "trsvcid": "46062" 00:18:27.982 }, 00:18:27.982 "auth": { 00:18:27.982 "state": "completed", 00:18:27.982 "digest": "sha512", 00:18:27.982 "dhgroup": "null" 00:18:27.982 } 00:18:27.982 } 00:18:27.982 ]' 00:18:27.982 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.243 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.243 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.243 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:28.243 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.243 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.243 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.243 23:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.243 23:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb 
--dhchap-secret DHHC-1:02:ZjBmZGMyMGU1NTI4M2EzMjIxNTZiYjAxOWVmZDhhMDQzYWQ0ZjRlNjVmOWFlOTkyddIWdA==: --dhchap-ctrl-secret DHHC-1:01:YjM2OGQyMGYxNzFlNzA2NWQwNDYxZDNjZDI1MGEyMTFR/6t2: 00:18:29.185 23:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.185 23:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:29.185 23:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.185 23:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.185 23:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.185 23:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.185 23:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:29.185 23:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:29.185 23:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:18:29.185 23:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.185 23:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:29.185 23:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:29.185 23:06:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:29.185 23:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.185 23:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:18:29.185 23:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.185 23:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.185 23:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.185 23:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:29.185 23:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:29.446 00:18:29.446 23:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.446 23:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.446 23:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.707 23:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.707 23:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.707 23:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.707 23:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.707 23:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.707 23:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.707 { 00:18:29.707 "cntlid": 103, 00:18:29.707 "qid": 0, 00:18:29.707 "state": "enabled", 00:18:29.707 "thread": "nvmf_tgt_poll_group_000", 00:18:29.707 "listen_address": { 00:18:29.707 "trtype": "TCP", 00:18:29.707 "adrfam": "IPv4", 00:18:29.707 "traddr": "10.0.0.2", 00:18:29.707 "trsvcid": "4420" 00:18:29.707 }, 00:18:29.707 "peer_address": { 00:18:29.707 "trtype": "TCP", 00:18:29.707 "adrfam": "IPv4", 00:18:29.707 "traddr": "10.0.0.1", 00:18:29.707 "trsvcid": "46090" 00:18:29.707 }, 00:18:29.707 "auth": { 00:18:29.707 "state": "completed", 00:18:29.707 "digest": "sha512", 00:18:29.707 "dhgroup": "null" 00:18:29.707 } 00:18:29.707 } 00:18:29.707 ]' 00:18:29.707 23:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.707 23:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:29.707 23:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.707 23:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:29.707 23:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.707 23:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ 
completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.707 23:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.707 23:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.967 23:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:OTlkNzRjZTk4YjNiMWIyODljMDZjZTc3OWZiNzA1NzZhNThiYjNlM2JjMjFiY2I0YjY2ODlmZTJiMzI0ZmZjOR/2D94=: 00:18:30.539 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.539 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:30.539 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.539 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.800 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.800 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:30.800 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:30.800 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:30.800 23:06:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:30.800 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:18:30.800 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:30.800 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:30.800 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:30.800 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:30.800 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.800 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.800 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.800 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.800 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.800 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.800 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:31.062
00:18:31.062 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:31.062 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:31.062 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:31.324 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:31.324 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:31.324 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:31.324 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:31.324 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:31.324 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:31.324 {
00:18:31.324 "cntlid": 105,
00:18:31.324 "qid": 0,
00:18:31.324 "state": "enabled",
00:18:31.324 "thread": "nvmf_tgt_poll_group_000",
00:18:31.324 "listen_address": {
00:18:31.324 "trtype": "TCP",
00:18:31.324 "adrfam": "IPv4",
00:18:31.324 "traddr": "10.0.0.2",
00:18:31.324 "trsvcid": "4420"
00:18:31.324 },
00:18:31.324 "peer_address": {
00:18:31.324 "trtype": "TCP",
00:18:31.324 "adrfam": "IPv4",
00:18:31.324 "traddr": "10.0.0.1",
00:18:31.324 "trsvcid": "46124"
00:18:31.324 },
00:18:31.324 "auth": {
00:18:31.324 "state": "completed",
00:18:31.324 "digest": "sha512",
00:18:31.324 "dhgroup": "ffdhe2048"
00:18:31.324 }
00:18:31.324 }
00:18:31.324 ]'
00:18:31.324 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:31.324 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:31.324 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:31.324 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:31.324 23:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:31.324 23:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:31.324 23:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:31.324 23:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:31.585 23:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:NTM2M2EyMDhmZjY5N2U0MGUwOTEyZWY3Y2Y2ZmUxNzc4ZjU0NGRkM2ZkMDQ5YTA2s/bLvQ==: --dhchap-ctrl-secret DHHC-1:03:NWJmNGQ3MDZjZjgyZTFkOTQzZTVmODk5YWNkZjE5YzM3MWQ2MzllNTVmNzA2MTk5MWZjMDE2N2ZiNWE0ZmY4ZfryTDs=:
00:18:32.157 23:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:32.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:32.157 23:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:18:32.157 23:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:32.157 23:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:32.157 23:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:32.157 23:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:32.157 23:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:18:32.157 23:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:18:32.418 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1
00:18:32.418 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:32.418 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:18:32.418 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:18:32.418 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:18:32.418 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:32.418 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:32.418 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:32.418 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:32.418 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:32.418 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:32.418 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:32.678
00:18:32.678 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:32.678 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:32.678 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:32.939 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:32.939 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:32.939 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:32.939 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:32.939 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:32.939 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:32.939 {
00:18:32.939 "cntlid": 107,
00:18:32.939 "qid": 0,
00:18:32.939 "state": "enabled",
00:18:32.939 "thread": "nvmf_tgt_poll_group_000",
00:18:32.939 "listen_address": {
00:18:32.939 "trtype": "TCP",
00:18:32.939 "adrfam": "IPv4",
00:18:32.939 "traddr": "10.0.0.2",
00:18:32.939 "trsvcid": "4420"
00:18:32.939 },
00:18:32.939 "peer_address": {
00:18:32.939 "trtype": "TCP",
00:18:32.939 "adrfam": "IPv4",
00:18:32.939 "traddr": "10.0.0.1",
00:18:32.939 "trsvcid": "46154"
00:18:32.939 },
00:18:32.939 "auth": {
00:18:32.939 "state": "completed",
00:18:32.939 "digest": "sha512",
00:18:32.939 "dhgroup": "ffdhe2048"
00:18:32.939 }
00:18:32.939 }
00:18:32.939 ]'
00:18:32.939 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:32.939 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:32.939 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:32.939 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:32.939 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:32.939 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:32.939 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:32.939 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:33.200 23:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjQ1ZDdjODMwMTExOGRkNmM3MTRkNjE3NmE4NjRjMTRSZFkv: --dhchap-ctrl-secret DHHC-1:02:MDFiODEwZmUzNDNmMzA0NDhkNWI2ZDA0YWU5MzYyM2M1MzczNjU5NDAxMThkZDk3xKaCgg==:
00:18:33.772 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:33.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:33.772 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:18:33.772 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:33.772 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:33.772 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:33.772 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:33.772 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:18:33.772 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:18:34.033 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2
00:18:34.033 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:34.033 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:18:34.033 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:18:34.033 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:18:34.033 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:34.033 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:34.033 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:34.033 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:34.033 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:34.033 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:34.033 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:34.294
00:18:34.294 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:34.294 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:34.294 23:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:34.294 23:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:34.294 23:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:34.294 23:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:34.294 23:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:34.294 23:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:34.294 23:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:34.294 {
00:18:34.294 "cntlid": 109,
00:18:34.294 "qid": 0,
00:18:34.294 "state": "enabled",
00:18:34.294 "thread": "nvmf_tgt_poll_group_000",
00:18:34.294 "listen_address": {
00:18:34.294 "trtype": "TCP",
00:18:34.294 "adrfam": "IPv4",
00:18:34.294 "traddr": "10.0.0.2",
00:18:34.294 "trsvcid": "4420"
00:18:34.294 },
00:18:34.294 "peer_address": {
00:18:34.294 "trtype": "TCP",
00:18:34.294 "adrfam": "IPv4",
00:18:34.294 "traddr": "10.0.0.1",
00:18:34.294 "trsvcid": "46184"
00:18:34.294 },
00:18:34.294 "auth": {
00:18:34.294 "state": "completed",
00:18:34.294 "digest": "sha512",
00:18:34.294 "dhgroup": "ffdhe2048"
00:18:34.294 }
00:18:34.294 }
00:18:34.294 ]'
00:18:34.294 23:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:34.555 23:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:34.555 23:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:34.555 23:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:34.555 23:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:34.555 23:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:34.555 23:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:34.555 23:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:34.816 23:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjBmZGMyMGU1NTI4M2EzMjIxNTZiYjAxOWVmZDhhMDQzYWQ0ZjRlNjVmOWFlOTkyddIWdA==: --dhchap-ctrl-secret DHHC-1:01:YjM2OGQyMGYxNzFlNzA2NWQwNDYxZDNjZDI1MGEyMTFR/6t2:
00:18:35.428 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:35.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:35.428 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:18:35.428 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:35.428 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:35.428 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:35.428 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:35.428 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:18:35.428 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:18:35.695 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3
00:18:35.695 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:35.695 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:18:35.695 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:18:35.695 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:18:35.695 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:35.695 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3
00:18:35.695 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:35.695 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:35.695 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:35.695 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:35.695 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:35.695
00:18:35.961 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:35.961 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:35.961 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:35.961 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:35.961 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:35.961 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:35.961 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:35.961 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:35.961 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:35.961 {
00:18:35.961 "cntlid": 111,
00:18:35.962 "qid": 0,
00:18:35.962 "state": "enabled",
00:18:35.962 "thread": "nvmf_tgt_poll_group_000",
00:18:35.962 "listen_address": {
00:18:35.962 "trtype": "TCP",
00:18:35.962 "adrfam": "IPv4",
00:18:35.962 "traddr": "10.0.0.2",
00:18:35.962 "trsvcid": "4420"
00:18:35.962 },
00:18:35.962 "peer_address": {
00:18:35.962 "trtype": "TCP",
00:18:35.962 "adrfam": "IPv4",
00:18:35.962 "traddr": "10.0.0.1",
00:18:35.962 "trsvcid": "33728"
00:18:35.962 },
00:18:35.962 "auth": {
00:18:35.962 "state": "completed",
00:18:35.962 "digest": "sha512",
00:18:35.962 "dhgroup": "ffdhe2048"
00:18:35.962 }
00:18:35.962 }
00:18:35.962 ]'
00:18:35.962 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:35.962 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:35.962 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:36.222 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:36.222 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:36.222 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:36.222 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:36.222 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:36.222 23:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:OTlkNzRjZTk4YjNiMWIyODljMDZjZTc3OWZiNzA1NzZhNThiYjNlM2JjMjFiY2I0YjY2ODlmZTJiMzI0ZmZjOR/2D94=:
00:18:37.164 23:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:37.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:37.164 23:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:18:37.164 23:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:37.164 23:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:37.164 23:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:37.164 23:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:18:37.164 23:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:37.164 23:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:18:37.164 23:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:18:37.165 23:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0
00:18:37.165 23:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:37.165 23:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:18:37.165 23:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:18:37.165 23:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:18:37.165 23:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:37.165 23:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:37.165 23:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:37.165 23:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:37.165 23:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:37.165 23:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:37.165 23:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:37.426
00:18:37.426 23:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:37.426 23:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:37.426 23:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:37.687 23:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:37.687 23:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:37.687 23:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:37.687 23:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:37.687 23:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:37.687 23:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:37.687 {
00:18:37.687 "cntlid": 113,
00:18:37.687 "qid": 0,
00:18:37.687 "state": "enabled",
00:18:37.687 "thread": "nvmf_tgt_poll_group_000",
00:18:37.687 "listen_address": {
00:18:37.687 "trtype": "TCP",
00:18:37.687 "adrfam": "IPv4",
00:18:37.687 "traddr": "10.0.0.2",
00:18:37.687 "trsvcid": "4420"
00:18:37.687 },
00:18:37.687 "peer_address": {
00:18:37.687 "trtype": "TCP",
00:18:37.687 "adrfam": "IPv4",
00:18:37.687 "traddr": "10.0.0.1",
00:18:37.687 "trsvcid": "33768"
00:18:37.687 },
00:18:37.687 "auth": {
00:18:37.687 "state": "completed",
00:18:37.687 "digest": "sha512",
00:18:37.687 "dhgroup": "ffdhe3072"
00:18:37.687 }
00:18:37.687 }
00:18:37.687 ]'
00:18:37.687 23:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:37.687 23:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:37.687 23:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:37.687 23:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:37.687 23:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:37.687 23:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:37.687 23:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:37.687 23:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:37.948 23:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:NTM2M2EyMDhmZjY5N2U0MGUwOTEyZWY3Y2Y2ZmUxNzc4ZjU0NGRkM2ZkMDQ5YTA2s/bLvQ==: --dhchap-ctrl-secret DHHC-1:03:NWJmNGQ3MDZjZjgyZTFkOTQzZTVmODk5YWNkZjE5YzM3MWQ2MzllNTVmNzA2MTk5MWZjMDE2N2ZiNWE0ZmY4ZfryTDs=:
00:18:38.519 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:38.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:38.519 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:18:38.520 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:38.520 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:38.520 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:38.520 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:38.520 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:18:38.520 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:18:38.780 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1
00:18:38.780 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:38.780 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:18:38.780 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:18:38.780 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:18:38.780 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:38.780 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:38.780 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:38.780 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:38.780 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:38.780 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:38.780 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:39.040
00:18:39.041 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:39.041 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:39.041 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:39.301 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:39.301 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:39.301 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:39.301 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:39.301 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:39.301 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:39.301 {
00:18:39.301 "cntlid": 115,
00:18:39.301 "qid": 0,
00:18:39.301 "state": "enabled",
00:18:39.301 "thread": "nvmf_tgt_poll_group_000",
00:18:39.301 "listen_address": {
00:18:39.301 "trtype": "TCP",
00:18:39.301 "adrfam": "IPv4",
00:18:39.301 "traddr": "10.0.0.2",
00:18:39.301 "trsvcid": "4420"
00:18:39.301 },
00:18:39.301 "peer_address": {
00:18:39.301 "trtype": "TCP",
00:18:39.301 "adrfam": "IPv4",
00:18:39.301 "traddr": "10.0.0.1",
00:18:39.301 "trsvcid": "33776"
00:18:39.301 },
00:18:39.301 "auth": {
00:18:39.301 "state": "completed",
00:18:39.301 "digest": "sha512",
00:18:39.301 "dhgroup": "ffdhe3072"
00:18:39.301 }
00:18:39.301 }
00:18:39.301 ]'
00:18:39.301 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:39.301 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:39.301 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:39.301 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:39.301 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:39.301 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:39.301 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:39.301 23:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:39.562 23:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjQ1ZDdjODMwMTExOGRkNmM3MTRkNjE3NmE4NjRjMTRSZFkv: --dhchap-ctrl-secret DHHC-1:02:MDFiODEwZmUzNDNmMzA0NDhkNWI2ZDA0YWU5MzYyM2M1MzczNjU5NDAxMThkZDk3xKaCgg==:
00:18:40.133 23:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:40.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:40.133 23:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:18:40.133 23:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:40.133 23:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:40.133 23:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:40.133 23:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:40.133 23:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:18:40.133 23:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:18:40.394 23:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2
00:18:40.394 23:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:40.394 23:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:18:40.394 23:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:18:40.394 23:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:18:40.394 23:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:40.394 23:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:40.394 23:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:40.394 23:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:40.394 23:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:40.394 23:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:40.394 23:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:40.655
00:18:40.655 23:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:40.655 23:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:40.655 23:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:40.916 23:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:40.916 23:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:40.916 23:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:40.916 23:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target
-- common/autotest_common.sh@10 -- # set +x 00:18:40.916 23:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.916 23:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.916 { 00:18:40.916 "cntlid": 117, 00:18:40.916 "qid": 0, 00:18:40.916 "state": "enabled", 00:18:40.916 "thread": "nvmf_tgt_poll_group_000", 00:18:40.916 "listen_address": { 00:18:40.916 "trtype": "TCP", 00:18:40.916 "adrfam": "IPv4", 00:18:40.916 "traddr": "10.0.0.2", 00:18:40.916 "trsvcid": "4420" 00:18:40.916 }, 00:18:40.916 "peer_address": { 00:18:40.916 "trtype": "TCP", 00:18:40.916 "adrfam": "IPv4", 00:18:40.916 "traddr": "10.0.0.1", 00:18:40.916 "trsvcid": "33812" 00:18:40.916 }, 00:18:40.916 "auth": { 00:18:40.916 "state": "completed", 00:18:40.916 "digest": "sha512", 00:18:40.916 "dhgroup": "ffdhe3072" 00:18:40.916 } 00:18:40.916 } 00:18:40.916 ]' 00:18:40.916 23:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.916 23:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:40.916 23:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.916 23:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:40.916 23:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:40.916 23:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.916 23:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.916 23:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:41.176 23:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjBmZGMyMGU1NTI4M2EzMjIxNTZiYjAxOWVmZDhhMDQzYWQ0ZjRlNjVmOWFlOTkyddIWdA==: --dhchap-ctrl-secret DHHC-1:01:YjM2OGQyMGYxNzFlNzA2NWQwNDYxZDNjZDI1MGEyMTFR/6t2: 00:18:41.747 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.747 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:41.747 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.747 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.747 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.747 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.747 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:41.747 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:42.007 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:18:42.007 23:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.007 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:42.007 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:42.007 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:42.007 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.007 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:18:42.007 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.008 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.008 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.008 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.008 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.268 00:18:42.268 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.268 23:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.268 23:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.528 23:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.528 23:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.528 23:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.528 23:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.528 23:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.528 23:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.528 { 00:18:42.528 "cntlid": 119, 00:18:42.528 "qid": 0, 00:18:42.528 "state": "enabled", 00:18:42.528 "thread": "nvmf_tgt_poll_group_000", 00:18:42.528 "listen_address": { 00:18:42.528 "trtype": "TCP", 00:18:42.528 "adrfam": "IPv4", 00:18:42.528 "traddr": "10.0.0.2", 00:18:42.528 "trsvcid": "4420" 00:18:42.528 }, 00:18:42.528 "peer_address": { 00:18:42.528 "trtype": "TCP", 00:18:42.528 "adrfam": "IPv4", 00:18:42.528 "traddr": "10.0.0.1", 00:18:42.528 "trsvcid": "33838" 00:18:42.528 }, 00:18:42.528 "auth": { 00:18:42.528 "state": "completed", 00:18:42.528 "digest": "sha512", 00:18:42.528 "dhgroup": "ffdhe3072" 00:18:42.528 } 00:18:42.528 } 00:18:42.528 ]' 00:18:42.528 23:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.528 23:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:42.528 23:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.528 23:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:42.528 23:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.528 23:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.528 23:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.528 23:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.789 23:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:OTlkNzRjZTk4YjNiMWIyODljMDZjZTc3OWZiNzA1NzZhNThiYjNlM2JjMjFiY2I0YjY2ODlmZTJiMzI0ZmZjOR/2D94=: 00:18:43.358 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.358 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:43.358 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.358 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.358 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.358 23:07:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:43.358 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.358 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:43.358 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:43.618 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:18:43.618 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.618 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:43.618 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:43.618 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:43.618 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.618 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.618 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.618 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.618 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.618 23:07:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.618 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.878 00:18:43.878 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.878 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.878 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.138 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.138 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.138 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.138 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.138 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.138 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:44.138 { 00:18:44.138 "cntlid": 121, 00:18:44.138 "qid": 0, 00:18:44.138 "state": "enabled", 00:18:44.138 "thread": 
"nvmf_tgt_poll_group_000", 00:18:44.138 "listen_address": { 00:18:44.138 "trtype": "TCP", 00:18:44.138 "adrfam": "IPv4", 00:18:44.138 "traddr": "10.0.0.2", 00:18:44.138 "trsvcid": "4420" 00:18:44.138 }, 00:18:44.138 "peer_address": { 00:18:44.138 "trtype": "TCP", 00:18:44.138 "adrfam": "IPv4", 00:18:44.138 "traddr": "10.0.0.1", 00:18:44.138 "trsvcid": "33874" 00:18:44.138 }, 00:18:44.138 "auth": { 00:18:44.138 "state": "completed", 00:18:44.138 "digest": "sha512", 00:18:44.138 "dhgroup": "ffdhe4096" 00:18:44.138 } 00:18:44.138 } 00:18:44.138 ]' 00:18:44.138 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:44.138 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:44.138 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.138 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:44.138 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:44.138 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.138 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.138 23:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.398 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret 
DHHC-1:00:NTM2M2EyMDhmZjY5N2U0MGUwOTEyZWY3Y2Y2ZmUxNzc4ZjU0NGRkM2ZkMDQ5YTA2s/bLvQ==: --dhchap-ctrl-secret DHHC-1:03:NWJmNGQ3MDZjZjgyZTFkOTQzZTVmODk5YWNkZjE5YzM3MWQ2MzllNTVmNzA2MTk5MWZjMDE2N2ZiNWE0ZmY4ZfryTDs=: 00:18:45.338 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.338 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:45.338 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.338 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.338 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.338 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.338 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:45.338 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:45.338 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:18:45.338 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.338 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:45.338 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe4096 00:18:45.338 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:45.338 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.338 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.338 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.338 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.338 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.338 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.338 23:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.599 00:18:45.599 23:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.599 23:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.599 23:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.599 23:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.599 23:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.599 23:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.599 23:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.599 23:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.599 23:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.599 { 00:18:45.599 "cntlid": 123, 00:18:45.599 "qid": 0, 00:18:45.599 "state": "enabled", 00:18:45.599 "thread": "nvmf_tgt_poll_group_000", 00:18:45.599 "listen_address": { 00:18:45.599 "trtype": "TCP", 00:18:45.599 "adrfam": "IPv4", 00:18:45.599 "traddr": "10.0.0.2", 00:18:45.599 "trsvcid": "4420" 00:18:45.599 }, 00:18:45.599 "peer_address": { 00:18:45.599 "trtype": "TCP", 00:18:45.599 "adrfam": "IPv4", 00:18:45.599 "traddr": "10.0.0.1", 00:18:45.599 "trsvcid": "33900" 00:18:45.599 }, 00:18:45.599 "auth": { 00:18:45.599 "state": "completed", 00:18:45.599 "digest": "sha512", 00:18:45.599 "dhgroup": "ffdhe4096" 00:18:45.599 } 00:18:45.599 } 00:18:45.599 ]' 00:18:45.859 23:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.859 23:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:45.859 23:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.859 23:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:45.859 23:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.859 23:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.859 23:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.859 23:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.118 23:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjQ1ZDdjODMwMTExOGRkNmM3MTRkNjE3NmE4NjRjMTRSZFkv: --dhchap-ctrl-secret DHHC-1:02:MDFiODEwZmUzNDNmMzA0NDhkNWI2ZDA0YWU5MzYyM2M1MzczNjU5NDAxMThkZDk3xKaCgg==: 00:18:46.688 23:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.688 23:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:46.688 23:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.688 23:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.688 23:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.688 23:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.688 23:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:46.688 23:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:46.948 23:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:18:46.948 23:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.948 23:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:46.948 23:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:46.948 23:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:46.948 23:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.948 23:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.948 23:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.948 23:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.948 23:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.948 23:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.948 23:07:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.209 00:18:47.209 23:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.209 23:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.209 23:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.209 23:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.209 23:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.209 23:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.209 23:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.470 23:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.470 23:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.470 { 00:18:47.470 "cntlid": 125, 00:18:47.470 "qid": 0, 00:18:47.470 "state": "enabled", 00:18:47.470 "thread": "nvmf_tgt_poll_group_000", 00:18:47.470 "listen_address": { 00:18:47.470 "trtype": "TCP", 00:18:47.470 "adrfam": "IPv4", 00:18:47.470 "traddr": "10.0.0.2", 00:18:47.470 "trsvcid": "4420" 00:18:47.470 }, 00:18:47.470 "peer_address": { 00:18:47.470 "trtype": "TCP", 00:18:47.470 "adrfam": "IPv4", 00:18:47.470 "traddr": "10.0.0.1", 
00:18:47.470 "trsvcid": "49344" 00:18:47.470 }, 00:18:47.470 "auth": { 00:18:47.470 "state": "completed", 00:18:47.470 "digest": "sha512", 00:18:47.470 "dhgroup": "ffdhe4096" 00:18:47.470 } 00:18:47.470 } 00:18:47.470 ]' 00:18:47.470 23:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.470 23:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:47.470 23:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.470 23:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:47.470 23:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.470 23:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.470 23:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.470 23:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.731 23:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjBmZGMyMGU1NTI4M2EzMjIxNTZiYjAxOWVmZDhhMDQzYWQ0ZjRlNjVmOWFlOTkyddIWdA==: --dhchap-ctrl-secret DHHC-1:01:YjM2OGQyMGYxNzFlNzA2NWQwNDYxZDNjZDI1MGEyMTFR/6t2: 00:18:48.302 23:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.302 23:07:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:48.302 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.302 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.302 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.302 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.302 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:48.302 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:48.562 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:18:48.562 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.562 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:48.562 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:48.562 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:48.562 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.562 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:18:48.562 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.562 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.562 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.562 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:48.563 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:48.823 00:18:48.823 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.823 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.823 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.084 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.084 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.084 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.084 23:07:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.084 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.084 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.084 { 00:18:49.084 "cntlid": 127, 00:18:49.084 "qid": 0, 00:18:49.084 "state": "enabled", 00:18:49.084 "thread": "nvmf_tgt_poll_group_000", 00:18:49.084 "listen_address": { 00:18:49.084 "trtype": "TCP", 00:18:49.084 "adrfam": "IPv4", 00:18:49.084 "traddr": "10.0.0.2", 00:18:49.084 "trsvcid": "4420" 00:18:49.084 }, 00:18:49.084 "peer_address": { 00:18:49.084 "trtype": "TCP", 00:18:49.084 "adrfam": "IPv4", 00:18:49.084 "traddr": "10.0.0.1", 00:18:49.084 "trsvcid": "49376" 00:18:49.084 }, 00:18:49.084 "auth": { 00:18:49.084 "state": "completed", 00:18:49.084 "digest": "sha512", 00:18:49.084 "dhgroup": "ffdhe4096" 00:18:49.084 } 00:18:49.084 } 00:18:49.084 ]' 00:18:49.084 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.084 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:49.084 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.084 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:49.084 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.084 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.084 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.084 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.345 23:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:OTlkNzRjZTk4YjNiMWIyODljMDZjZTc3OWZiNzA1NzZhNThiYjNlM2JjMjFiY2I0YjY2ODlmZTJiMzI0ZmZjOR/2D94=: 00:18:49.916 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.916 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:49.916 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.916 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.916 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.916 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:49.916 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:49.916 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:49.916 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:50.176 23:07:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:18:50.176 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.176 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:50.176 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:50.176 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:50.176 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.176 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.176 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.176 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.176 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.176 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.176 23:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:18:50.436 00:18:50.436 23:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.436 23:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.436 23:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.695 23:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.695 23:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.695 23:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.695 23:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.695 23:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.695 23:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:50.695 { 00:18:50.695 "cntlid": 129, 00:18:50.695 "qid": 0, 00:18:50.695 "state": "enabled", 00:18:50.695 "thread": "nvmf_tgt_poll_group_000", 00:18:50.695 "listen_address": { 00:18:50.695 "trtype": "TCP", 00:18:50.695 "adrfam": "IPv4", 00:18:50.695 "traddr": "10.0.0.2", 00:18:50.695 "trsvcid": "4420" 00:18:50.695 }, 00:18:50.695 "peer_address": { 00:18:50.695 "trtype": "TCP", 00:18:50.695 "adrfam": "IPv4", 00:18:50.695 "traddr": "10.0.0.1", 00:18:50.695 "trsvcid": "49404" 00:18:50.695 }, 00:18:50.695 "auth": { 00:18:50.695 "state": "completed", 00:18:50.695 "digest": "sha512", 00:18:50.695 "dhgroup": "ffdhe6144" 00:18:50.695 } 00:18:50.695 } 00:18:50.695 ]' 00:18:50.695 23:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:50.695 
23:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:50.695 23:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:50.695 23:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:50.695 23:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:50.954 23:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.954 23:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.954 23:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.954 23:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:NTM2M2EyMDhmZjY5N2U0MGUwOTEyZWY3Y2Y2ZmUxNzc4ZjU0NGRkM2ZkMDQ5YTA2s/bLvQ==: --dhchap-ctrl-secret DHHC-1:03:NWJmNGQ3MDZjZjgyZTFkOTQzZTVmODk5YWNkZjE5YzM3MWQ2MzllNTVmNzA2MTk5MWZjMDE2N2ZiNWE0ZmY4ZfryTDs=: 00:18:51.893 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.893 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:51.893 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:51.893 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.893 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.893 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.893 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:51.893 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:51.893 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:18:51.893 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.893 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:51.893 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:51.893 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:51.893 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.893 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.893 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.893 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:51.893 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.893 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.893 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.154 00:18:52.154 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.154 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.154 23:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.426 23:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.426 23:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.426 23:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.426 23:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.426 23:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.426 23:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:18:52.426 { 00:18:52.426 "cntlid": 131, 00:18:52.426 "qid": 0, 00:18:52.426 "state": "enabled", 00:18:52.426 "thread": "nvmf_tgt_poll_group_000", 00:18:52.426 "listen_address": { 00:18:52.426 "trtype": "TCP", 00:18:52.426 "adrfam": "IPv4", 00:18:52.426 "traddr": "10.0.0.2", 00:18:52.426 "trsvcid": "4420" 00:18:52.426 }, 00:18:52.426 "peer_address": { 00:18:52.426 "trtype": "TCP", 00:18:52.426 "adrfam": "IPv4", 00:18:52.426 "traddr": "10.0.0.1", 00:18:52.426 "trsvcid": "49436" 00:18:52.426 }, 00:18:52.426 "auth": { 00:18:52.426 "state": "completed", 00:18:52.426 "digest": "sha512", 00:18:52.426 "dhgroup": "ffdhe6144" 00:18:52.426 } 00:18:52.426 } 00:18:52.426 ]' 00:18:52.426 23:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.426 23:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:52.426 23:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.426 23:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:52.426 23:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.426 23:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.426 23:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.426 23:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.686 23:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 
--hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjQ1ZDdjODMwMTExOGRkNmM3MTRkNjE3NmE4NjRjMTRSZFkv: --dhchap-ctrl-secret DHHC-1:02:MDFiODEwZmUzNDNmMzA0NDhkNWI2ZDA0YWU5MzYyM2M1MzczNjU5NDAxMThkZDk3xKaCgg==: 00:18:53.284 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.284 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:53.284 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.284 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.284 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.284 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.284 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:53.284 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:53.543 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:18:53.543 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.543 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:53.543 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:53.543 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:53.543 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.543 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.543 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.543 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.543 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.543 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.543 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.803 00:18:54.063 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.063 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.063 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.063 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.063 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.063 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.063 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.063 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.063 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.063 { 00:18:54.063 "cntlid": 133, 00:18:54.063 "qid": 0, 00:18:54.063 "state": "enabled", 00:18:54.063 "thread": "nvmf_tgt_poll_group_000", 00:18:54.063 "listen_address": { 00:18:54.063 "trtype": "TCP", 00:18:54.063 "adrfam": "IPv4", 00:18:54.063 "traddr": "10.0.0.2", 00:18:54.063 "trsvcid": "4420" 00:18:54.063 }, 00:18:54.063 "peer_address": { 00:18:54.063 "trtype": "TCP", 00:18:54.063 "adrfam": "IPv4", 00:18:54.063 "traddr": "10.0.0.1", 00:18:54.063 "trsvcid": "49464" 00:18:54.063 }, 00:18:54.063 "auth": { 00:18:54.063 "state": "completed", 00:18:54.063 "digest": "sha512", 00:18:54.063 "dhgroup": "ffdhe6144" 00:18:54.063 } 00:18:54.063 } 00:18:54.063 ]' 00:18:54.063 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.063 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:54.063 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.323 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 
00:18:54.323 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.323 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.323 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.323 23:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.323 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjBmZGMyMGU1NTI4M2EzMjIxNTZiYjAxOWVmZDhhMDQzYWQ0ZjRlNjVmOWFlOTkyddIWdA==: --dhchap-ctrl-secret DHHC-1:01:YjM2OGQyMGYxNzFlNzA2NWQwNDYxZDNjZDI1MGEyMTFR/6t2: 00:18:55.263 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.263 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:55.263 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.263 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.263 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.263 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.263 23:07:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:55.263 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:55.263 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:18:55.263 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.263 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:55.263 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:55.263 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:55.263 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.263 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:18:55.263 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.263 23:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.263 23:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.263 23:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:18:55.263 23:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:55.834 00:18:55.834 23:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.834 23:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.834 23:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.834 23:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.834 23:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.834 23:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.834 23:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.834 23:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.834 23:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.834 { 00:18:55.834 "cntlid": 135, 00:18:55.834 "qid": 0, 00:18:55.834 "state": "enabled", 00:18:55.834 "thread": "nvmf_tgt_poll_group_000", 00:18:55.834 "listen_address": { 00:18:55.834 "trtype": "TCP", 00:18:55.834 "adrfam": "IPv4", 00:18:55.834 "traddr": "10.0.0.2", 00:18:55.834 "trsvcid": "4420" 00:18:55.834 }, 00:18:55.834 "peer_address": { 00:18:55.834 "trtype": "TCP", 00:18:55.834 "adrfam": "IPv4", 00:18:55.834 "traddr": "10.0.0.1", 
00:18:55.834 "trsvcid": "49480" 00:18:55.834 }, 00:18:55.834 "auth": { 00:18:55.834 "state": "completed", 00:18:55.834 "digest": "sha512", 00:18:55.834 "dhgroup": "ffdhe6144" 00:18:55.834 } 00:18:55.834 } 00:18:55.834 ]' 00:18:55.834 23:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.834 23:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:55.834 23:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.094 23:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:56.094 23:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.094 23:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.094 23:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.094 23:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.094 23:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:OTlkNzRjZTk4YjNiMWIyODljMDZjZTc3OWZiNzA1NzZhNThiYjNlM2JjMjFiY2I0YjY2ODlmZTJiMzI0ZmZjOR/2D94=: 00:18:57.035 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.035 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:57.035 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.035 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.035 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.035 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:57.035 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.035 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:57.035 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:57.035 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:18:57.035 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:57.035 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:57.035 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:57.035 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:57.036 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.036 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.036 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.036 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.036 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.036 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.036 23:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.607 00:18:57.607 23:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.607 23:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.607 23:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.868 23:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.868 23:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.868 23:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.868 23:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.868 23:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.868 23:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.868 { 00:18:57.868 "cntlid": 137, 00:18:57.868 "qid": 0, 00:18:57.868 "state": "enabled", 00:18:57.868 "thread": "nvmf_tgt_poll_group_000", 00:18:57.868 "listen_address": { 00:18:57.868 "trtype": "TCP", 00:18:57.868 "adrfam": "IPv4", 00:18:57.868 "traddr": "10.0.0.2", 00:18:57.868 "trsvcid": "4420" 00:18:57.868 }, 00:18:57.868 "peer_address": { 00:18:57.868 "trtype": "TCP", 00:18:57.868 "adrfam": "IPv4", 00:18:57.868 "traddr": "10.0.0.1", 00:18:57.868 "trsvcid": "44260" 00:18:57.868 }, 00:18:57.868 "auth": { 00:18:57.868 "state": "completed", 00:18:57.868 "digest": "sha512", 00:18:57.868 "dhgroup": "ffdhe8192" 00:18:57.868 } 00:18:57.868 } 00:18:57.868 ]' 00:18:57.868 23:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.868 23:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:57.868 23:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.868 23:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:57.868 23:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.868 23:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.868 23:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.868 23:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.129 23:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:NTM2M2EyMDhmZjY5N2U0MGUwOTEyZWY3Y2Y2ZmUxNzc4ZjU0NGRkM2ZkMDQ5YTA2s/bLvQ==: --dhchap-ctrl-secret DHHC-1:03:NWJmNGQ3MDZjZjgyZTFkOTQzZTVmODk5YWNkZjE5YzM3MWQ2MzllNTVmNzA2MTk5MWZjMDE2N2ZiNWE0ZmY4ZfryTDs=: 00:18:58.700 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.700 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:58.700 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.700 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.701 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.701 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.701 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:58.701 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:58.961 23:07:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:18:58.961 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.961 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:58.961 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:58.961 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:58.961 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.961 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.961 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.961 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.961 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.961 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.961 23:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:18:59.531 00:18:59.531 23:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.531 23:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.531 23:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.531 23:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.531 23:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.531 23:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.531 23:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.792 23:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.792 23:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.792 { 00:18:59.792 "cntlid": 139, 00:18:59.792 "qid": 0, 00:18:59.792 "state": "enabled", 00:18:59.792 "thread": "nvmf_tgt_poll_group_000", 00:18:59.792 "listen_address": { 00:18:59.792 "trtype": "TCP", 00:18:59.792 "adrfam": "IPv4", 00:18:59.792 "traddr": "10.0.0.2", 00:18:59.792 "trsvcid": "4420" 00:18:59.792 }, 00:18:59.792 "peer_address": { 00:18:59.792 "trtype": "TCP", 00:18:59.792 "adrfam": "IPv4", 00:18:59.792 "traddr": "10.0.0.1", 00:18:59.792 "trsvcid": "44292" 00:18:59.792 }, 00:18:59.792 "auth": { 00:18:59.792 "state": "completed", 00:18:59.792 "digest": "sha512", 00:18:59.792 "dhgroup": "ffdhe8192" 00:18:59.792 } 00:18:59.792 } 00:18:59.792 ]' 00:18:59.792 23:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.792 
23:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:59.792 23:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.792 23:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:59.792 23:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.792 23:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.792 23:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.792 23:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.053 23:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:YjQ1ZDdjODMwMTExOGRkNmM3MTRkNjE3NmE4NjRjMTRSZFkv: --dhchap-ctrl-secret DHHC-1:02:MDFiODEwZmUzNDNmMzA0NDhkNWI2ZDA0YWU5MzYyM2M1MzczNjU5NDAxMThkZDk3xKaCgg==: 00:19:00.622 23:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.622 23:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:00.622 23:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.622 23:07:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.622 23:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.623 23:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.623 23:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:00.623 23:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:00.883 23:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:00.883 23:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.883 23:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:00.883 23:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:00.883 23:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:00.883 23:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.883 23:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.883 23:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.883 23:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.883 23:07:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.883 23:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.883 23:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.453 00:19:01.453 23:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.453 23:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.453 23:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.453 23:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.453 23:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.453 23:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.453 23:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.453 23:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.453 23:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.453 { 
00:19:01.453 "cntlid": 141, 00:19:01.453 "qid": 0, 00:19:01.453 "state": "enabled", 00:19:01.453 "thread": "nvmf_tgt_poll_group_000", 00:19:01.453 "listen_address": { 00:19:01.453 "trtype": "TCP", 00:19:01.453 "adrfam": "IPv4", 00:19:01.453 "traddr": "10.0.0.2", 00:19:01.453 "trsvcid": "4420" 00:19:01.453 }, 00:19:01.453 "peer_address": { 00:19:01.453 "trtype": "TCP", 00:19:01.453 "adrfam": "IPv4", 00:19:01.453 "traddr": "10.0.0.1", 00:19:01.453 "trsvcid": "44320" 00:19:01.453 }, 00:19:01.453 "auth": { 00:19:01.453 "state": "completed", 00:19:01.453 "digest": "sha512", 00:19:01.453 "dhgroup": "ffdhe8192" 00:19:01.453 } 00:19:01.453 } 00:19:01.453 ]' 00:19:01.453 23:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.714 23:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:01.714 23:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.714 23:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:01.714 23:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.714 23:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.714 23:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.714 23:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.714 23:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 
801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjBmZGMyMGU1NTI4M2EzMjIxNTZiYjAxOWVmZDhhMDQzYWQ0ZjRlNjVmOWFlOTkyddIWdA==: --dhchap-ctrl-secret DHHC-1:01:YjM2OGQyMGYxNzFlNzA2NWQwNDYxZDNjZDI1MGEyMTFR/6t2: 00:19:02.655 23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.655 23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:02.655 23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.655 23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.655 23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.655 23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.655 23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:02.655 23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:02.655 23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:19:02.655 23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.655 23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:02.655 23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe8192 00:19:02.655 23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:02.655 23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.655 23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:02.655 23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.655 23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.655 23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.655 23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:02.655 23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:03.225 00:19:03.225 23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.225 23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.225 23:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.486 23:07:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.486 23:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.486 23:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.486 23:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.486 23:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.486 23:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.486 { 00:19:03.486 "cntlid": 143, 00:19:03.486 "qid": 0, 00:19:03.486 "state": "enabled", 00:19:03.486 "thread": "nvmf_tgt_poll_group_000", 00:19:03.486 "listen_address": { 00:19:03.486 "trtype": "TCP", 00:19:03.486 "adrfam": "IPv4", 00:19:03.486 "traddr": "10.0.0.2", 00:19:03.486 "trsvcid": "4420" 00:19:03.486 }, 00:19:03.486 "peer_address": { 00:19:03.486 "trtype": "TCP", 00:19:03.486 "adrfam": "IPv4", 00:19:03.486 "traddr": "10.0.0.1", 00:19:03.486 "trsvcid": "44342" 00:19:03.486 }, 00:19:03.486 "auth": { 00:19:03.486 "state": "completed", 00:19:03.486 "digest": "sha512", 00:19:03.486 "dhgroup": "ffdhe8192" 00:19:03.486 } 00:19:03.486 } 00:19:03.486 ]' 00:19:03.486 23:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.486 23:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:03.486 23:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.486 23:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:03.486 23:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.486 23:07:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.486 23:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.486 23:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.746 23:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:OTlkNzRjZTk4YjNiMWIyODljMDZjZTc3OWZiNzA1NzZhNThiYjNlM2JjMjFiY2I0YjY2ODlmZTJiMzI0ZmZjOR/2D94=: 00:19:04.687 23:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.687 23:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:04.687 23:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.687 23:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.687 23:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.687 23:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:04.687 23:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:04.687 23:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:04.687 23:07:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:04.687 23:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:04.687 23:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:04.687 23:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:04.687 23:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.687 23:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:04.687 23:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:04.687 23:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:04.687 23:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.687 23:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.687 23:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.687 23:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.687 23:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:19:04.687 23:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.687 23:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.258 00:19:05.258 23:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.258 23:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.258 23:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.258 23:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.258 23:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.258 23:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.259 23:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.259 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.259 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.259 { 00:19:05.259 "cntlid": 145, 00:19:05.259 "qid": 0, 00:19:05.259 "state": "enabled", 
00:19:05.259 "thread": "nvmf_tgt_poll_group_000", 00:19:05.259 "listen_address": { 00:19:05.259 "trtype": "TCP", 00:19:05.259 "adrfam": "IPv4", 00:19:05.259 "traddr": "10.0.0.2", 00:19:05.259 "trsvcid": "4420" 00:19:05.259 }, 00:19:05.259 "peer_address": { 00:19:05.259 "trtype": "TCP", 00:19:05.259 "adrfam": "IPv4", 00:19:05.259 "traddr": "10.0.0.1", 00:19:05.259 "trsvcid": "44372" 00:19:05.259 }, 00:19:05.259 "auth": { 00:19:05.259 "state": "completed", 00:19:05.259 "digest": "sha512", 00:19:05.259 "dhgroup": "ffdhe8192" 00:19:05.259 } 00:19:05.259 } 00:19:05.259 ]' 00:19:05.259 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.519 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:05.519 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.519 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:05.519 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.519 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.519 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.519 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.519 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret 
DHHC-1:00:NTM2M2EyMDhmZjY5N2U0MGUwOTEyZWY3Y2Y2ZmUxNzc4ZjU0NGRkM2ZkMDQ5YTA2s/bLvQ==: --dhchap-ctrl-secret DHHC-1:03:NWJmNGQ3MDZjZjgyZTFkOTQzZTVmODk5YWNkZjE5YzM3MWQ2MzllNTVmNzA2MTk5MWZjMDE2N2ZiNWE0ZmY4ZfryTDs=: 00:19:06.461 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.461 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:06.461 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.461 23:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.461 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.461 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 00:19:06.461 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.461 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.461 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.461 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:06.461 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:06.461 
23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:06.461 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:06.461 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:06.461 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:06.461 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:06.461 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:06.461 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:06.722 request: 00:19:06.722 { 00:19:06.722 "name": "nvme0", 00:19:06.722 "trtype": "tcp", 00:19:06.722 "traddr": "10.0.0.2", 00:19:06.722 "adrfam": "ipv4", 00:19:06.722 "trsvcid": "4420", 00:19:06.722 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:06.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:06.722 "prchk_reftag": false, 00:19:06.722 "prchk_guard": false, 00:19:06.722 "hdgst": false, 00:19:06.722 "ddgst": false, 00:19:06.722 "dhchap_key": "key2", 
00:19:06.722 "method": "bdev_nvme_attach_controller", 00:19:06.722 "req_id": 1 00:19:06.722 } 00:19:06.722 Got JSON-RPC error response 00:19:06.722 response: 00:19:06.722 { 00:19:06.722 "code": -5, 00:19:06.722 "message": "Input/output error" 00:19:06.722 } 00:19:06.722 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:06.722 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:06.722 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:06.722 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:06.722 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:06.722 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.722 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.983 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.983 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.983 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.983 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.983 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.983 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT 
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:06.983 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:06.983 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:06.983 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:06.983 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:06.983 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:06.983 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:06.983 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:06.983 23:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:07.244 request: 00:19:07.244 { 00:19:07.244 "name": "nvme0", 00:19:07.244 
"trtype": "tcp", 00:19:07.244 "traddr": "10.0.0.2", 00:19:07.244 "adrfam": "ipv4", 00:19:07.244 "trsvcid": "4420", 00:19:07.244 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:07.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:07.244 "prchk_reftag": false, 00:19:07.244 "prchk_guard": false, 00:19:07.244 "hdgst": false, 00:19:07.244 "ddgst": false, 00:19:07.244 "dhchap_key": "key1", 00:19:07.244 "dhchap_ctrlr_key": "ckey2", 00:19:07.244 "method": "bdev_nvme_attach_controller", 00:19:07.244 "req_id": 1 00:19:07.244 } 00:19:07.244 Got JSON-RPC error response 00:19:07.244 response: 00:19:07.244 { 00:19:07.244 "code": -5, 00:19:07.244 "message": "Input/output error" 00:19:07.244 } 00:19:07.244 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:07.244 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:07.244 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:07.244 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:07.244 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:07.244 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.244 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.244 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.244 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 
00:19:07.244 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.244 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.504 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.505 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.505 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:07.505 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.505 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:07.505 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:07.505 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:07.505 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:07.505 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.505 23:07:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.765 request: 00:19:07.765 { 00:19:07.765 "name": "nvme0", 00:19:07.765 "trtype": "tcp", 00:19:07.765 "traddr": "10.0.0.2", 00:19:07.765 "adrfam": "ipv4", 00:19:07.765 "trsvcid": "4420", 00:19:07.765 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:07.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:07.765 "prchk_reftag": false, 00:19:07.765 "prchk_guard": false, 00:19:07.765 "hdgst": false, 00:19:07.765 "ddgst": false, 00:19:07.765 "dhchap_key": "key1", 00:19:07.765 "dhchap_ctrlr_key": "ckey1", 00:19:07.765 "method": "bdev_nvme_attach_controller", 00:19:07.765 "req_id": 1 00:19:07.765 } 00:19:07.765 Got JSON-RPC error response 00:19:07.765 response: 00:19:07.765 { 00:19:07.765 "code": -5, 00:19:07.765 "message": "Input/output error" 00:19:07.765 } 00:19:07.765 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:07.765 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:07.765 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:07.765 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:07.765 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:07.765 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:07.765 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.765 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.765 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 839640 00:19:07.765 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 839640 ']' 00:19:07.765 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 839640 00:19:07.765 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:19:07.765 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:07.765 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 839640 00:19:08.027 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:08.027 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:08.027 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 839640' 00:19:08.027 killing process with pid 839640 00:19:08.027 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 839640 00:19:08.027 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 839640 00:19:08.027 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:08.027 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:08.027 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # 
xtrace_disable 00:19:08.027 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.027 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=865791 00:19:08.027 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 865791 00:19:08.027 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:08.027 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 865791 ']' 00:19:08.027 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.027 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:08.027 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:08.027 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:08.027 23:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.970 23:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:08.970 23:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:08.970 23:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:08.970 23:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:08.970 23:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.970 23:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.970 23:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:08.970 23:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 865791 00:19:08.970 23:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 865791 ']' 00:19:08.970 23:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.970 23:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:08.970 23:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:08.971 23:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:08.971 23:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.971 23:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:08.971 23:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:08.971 23:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:19:08.971 23:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.971 23:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.231 23:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.231 23:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:19:09.231 23:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.231 23:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:09.231 23:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:09.231 23:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:09.231 23:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.231 23:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:09.231 23:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.231 
23:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.231 23:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.231 23:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:09.231 23:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:09.870 00:19:09.870 23:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.870 23:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.870 23:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.870 23:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.870 23:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.870 23:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.870 23:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.870 23:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.870 23:07:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.870 { 00:19:09.870 "cntlid": 1, 00:19:09.870 "qid": 0, 00:19:09.870 "state": "enabled", 00:19:09.870 "thread": "nvmf_tgt_poll_group_000", 00:19:09.870 "listen_address": { 00:19:09.870 "trtype": "TCP", 00:19:09.870 "adrfam": "IPv4", 00:19:09.870 "traddr": "10.0.0.2", 00:19:09.870 "trsvcid": "4420" 00:19:09.870 }, 00:19:09.870 "peer_address": { 00:19:09.870 "trtype": "TCP", 00:19:09.870 "adrfam": "IPv4", 00:19:09.870 "traddr": "10.0.0.1", 00:19:09.870 "trsvcid": "52734" 00:19:09.870 }, 00:19:09.870 "auth": { 00:19:09.870 "state": "completed", 00:19:09.870 "digest": "sha512", 00:19:09.870 "dhgroup": "ffdhe8192" 00:19:09.870 } 00:19:09.870 } 00:19:09.870 ]' 00:19:09.870 23:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.870 23:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:09.870 23:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.870 23:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:09.870 23:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.131 23:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.131 23:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.131 23:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.131 23:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:OTlkNzRjZTk4YjNiMWIyODljMDZjZTc3OWZiNzA1NzZhNThiYjNlM2JjMjFiY2I0YjY2ODlmZTJiMzI0ZmZjOR/2D94=: 00:19:11.073 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.073 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:11.073 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.073 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.073 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.074 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:19:11.074 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.074 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.074 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.074 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:11.074 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:11.074 23:07:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:11.074 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:11.074 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:11.074 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:11.074 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.074 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:11.074 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.074 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:11.074 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:11.335 request: 00:19:11.335 { 00:19:11.335 "name": "nvme0", 00:19:11.335 "trtype": "tcp", 00:19:11.335 
"traddr": "10.0.0.2", 00:19:11.335 "adrfam": "ipv4", 00:19:11.335 "trsvcid": "4420", 00:19:11.335 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:11.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:11.335 "prchk_reftag": false, 00:19:11.335 "prchk_guard": false, 00:19:11.335 "hdgst": false, 00:19:11.335 "ddgst": false, 00:19:11.335 "dhchap_key": "key3", 00:19:11.335 "method": "bdev_nvme_attach_controller", 00:19:11.335 "req_id": 1 00:19:11.335 } 00:19:11.335 Got JSON-RPC error response 00:19:11.335 response: 00:19:11.335 { 00:19:11.335 "code": -5, 00:19:11.335 "message": "Input/output error" 00:19:11.335 } 00:19:11.335 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:11.335 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:11.335 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:11.335 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:11.335 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:19:11.335 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:19:11.335 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:11.335 23:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:11.335 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:11.335 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:11.335 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:11.335 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:11.335 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.335 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:11.335 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.335 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:11.335 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:11.596 request: 00:19:11.596 { 00:19:11.596 "name": "nvme0", 00:19:11.596 "trtype": "tcp", 00:19:11.596 "traddr": "10.0.0.2", 00:19:11.596 "adrfam": "ipv4", 00:19:11.596 "trsvcid": "4420", 00:19:11.596 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:11.596 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:11.596 "prchk_reftag": false, 00:19:11.596 "prchk_guard": false, 00:19:11.596 "hdgst": false, 00:19:11.596 "ddgst": false, 00:19:11.596 "dhchap_key": "key3", 00:19:11.596 "method": "bdev_nvme_attach_controller", 00:19:11.596 "req_id": 1 00:19:11.596 } 00:19:11.596 Got JSON-RPC error response 00:19:11.596 response: 00:19:11.596 { 00:19:11.596 "code": -5, 00:19:11.596 "message": "Input/output error" 00:19:11.596 } 00:19:11.596 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:11.596 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:11.596 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:11.596 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:11.596 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:11.596 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:19:11.596 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:11.596 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:11.596 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:11.596 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:19:11.857 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:11.857 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.857 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.857 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.857 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:11.857 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.857 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.857 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.857 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:11.857 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:11.857 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:11.857 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:11.857 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.857 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:11.857 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.857 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:11.857 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:11.857 request: 00:19:11.857 { 00:19:11.857 "name": "nvme0", 00:19:11.857 "trtype": "tcp", 00:19:11.857 "traddr": "10.0.0.2", 00:19:11.857 "adrfam": "ipv4", 00:19:11.857 "trsvcid": "4420", 00:19:11.857 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:11.857 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:19:11.857 "prchk_reftag": false, 00:19:11.857 "prchk_guard": false, 00:19:11.857 "hdgst": false, 00:19:11.857 "ddgst": false, 00:19:11.857 "dhchap_key": "key0", 00:19:11.857 "dhchap_ctrlr_key": "key1", 00:19:11.857 "method": "bdev_nvme_attach_controller", 00:19:11.857 "req_id": 1 00:19:11.857 } 00:19:11.857 Got JSON-RPC error response 00:19:11.857 response: 00:19:11.857 { 00:19:11.857 "code": -5, 00:19:11.857 "message": "Input/output error" 00:19:11.857 } 00:19:11.857 23:07:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:11.857 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:11.857 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:11.857 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:11.857 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:11.857 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:12.118 00:19:12.118 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:19:12.118 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:19:12.118 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.378 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.378 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.378 23:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:12.378 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:19:12.378 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:19:12.378 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 839867 00:19:12.378 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 839867 ']' 00:19:12.378 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 839867 00:19:12.378 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:19:12.378 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:12.378 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 839867 00:19:12.639 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:12.639 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:12.639 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 839867' 00:19:12.639 killing process with pid 839867 00:19:12.639 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 839867 00:19:12.639 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 839867 00:19:12.639 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:12.639 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:12.639 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:19:12.639 23:07:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:12.639 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:19:12.639 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:12.639 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:12.639 rmmod nvme_tcp 00:19:12.639 rmmod nvme_fabrics 00:19:12.639 rmmod nvme_keyring 00:19:12.900 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:12.900 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:19:12.900 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:19:12.900 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 865791 ']' 00:19:12.900 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 865791 00:19:12.900 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 865791 ']' 00:19:12.900 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 865791 00:19:12.900 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:19:12.900 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:12.900 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 865791 00:19:12.900 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:12.900 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:12.900 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 865791' 00:19:12.900 killing process with pid 865791 00:19:12.900 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 865791 00:19:12.900 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 865791 00:19:12.900 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:12.900 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:12.900 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:12.900 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:12.900 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:12.900 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.900 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:12.900 23:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.446 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:15.446 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.a0A /tmp/spdk.key-sha256.n62 /tmp/spdk.key-sha384.gE8 /tmp/spdk.key-sha512.yzp /tmp/spdk.key-sha512.056 /tmp/spdk.key-sha384.FLt /tmp/spdk.key-sha256.Uaj '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:15.446 00:19:15.446 real 2m21.852s 00:19:15.446 user 5m13.802s 00:19:15.446 sys 0m19.940s 00:19:15.446 23:07:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:15.446 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.446 ************************************ 00:19:15.446 END TEST nvmf_auth_target 00:19:15.446 ************************************ 00:19:15.446 23:07:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:15.446 23:07:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:15.446 23:07:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:19:15.446 23:07:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:15.446 23:07:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:15.446 ************************************ 00:19:15.446 START TEST nvmf_bdevio_no_huge 00:19:15.446 ************************************ 00:19:15.446 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:15.446 * Looking for test storage... 
00:19:15.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:15.446 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:15.446 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:15.446 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:15.446 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:15.446 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:15.446 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:15.446 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:15.446 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:15.446 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:15.446 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:15.446 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:15.446 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:15.446 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:15.446 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:15.446 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:15.446 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:15.447 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:15.447 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:15.447 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:15.447 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:15.447 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:15.447 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:15.447 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.447 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.447 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.447 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:15.447 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.447 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:19:15.447 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:15.447 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:15.447 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:15.447 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:15.447 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:15.447 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:15.447 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:15.447 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:15.447 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:15.447 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:15.447 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:15.447 
23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:15.447 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:15.447 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:15.447 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:15.447 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:15.447 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.447 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:15.447 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.447 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:15.447 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:15.447 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:19:15.447 23:07:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:23.589 23:07:40 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:23.589 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:23.589 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:23.589 Found net devices under 0000:31:00.0: cvl_0_0 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:23.589 Found net devices under 0000:31:00.1: cvl_0_1 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.589 23:07:40 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:23.589 23:07:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:23.589 23:07:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:23.589 23:07:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:23.589 23:07:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:23.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:23.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:19:23.589 00:19:23.589 --- 10.0.0.2 ping statistics --- 00:19:23.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.589 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:19:23.589 23:07:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:23.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:23.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:19:23.589 00:19:23.589 --- 10.0.0.1 ping statistics --- 00:19:23.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.590 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:19:23.590 23:07:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:23.590 23:07:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:19:23.590 23:07:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:23.590 23:07:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:23.590 23:07:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:23.590 23:07:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:23.590 23:07:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:23.590 23:07:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:23.590 23:07:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:23.590 23:07:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:23.590 23:07:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:23.590 23:07:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:23.590 23:07:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:23.590 23:07:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=871524 00:19:23.590 23:07:41 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 871524 00:19:23.590 23:07:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:23.590 23:07:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 871524 ']' 00:19:23.590 23:07:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.590 23:07:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:23.590 23:07:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.590 23:07:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:23.590 23:07:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:23.590 [2024-07-24 23:07:41.230429] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:19:23.590 [2024-07-24 23:07:41.230506] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:23.590 [2024-07-24 23:07:41.330409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:23.850 [2024-07-24 23:07:41.436338] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:23.850 [2024-07-24 23:07:41.436392] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:23.850 [2024-07-24 23:07:41.436401] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:23.850 [2024-07-24 23:07:41.436408] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:23.850 [2024-07-24 23:07:41.436414] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:23.851 [2024-07-24 23:07:41.436585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:23.851 [2024-07-24 23:07:41.436715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:19:23.851 [2024-07-24 23:07:41.437044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:23.851 [2024-07-24 23:07:41.436849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:24.422 [2024-07-24 23:07:42.066717] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:24.422 Malloc0 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:24.422 [2024-07-24 23:07:42.120320] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:24.422 { 00:19:24.422 "params": { 00:19:24.422 "name": "Nvme$subsystem", 00:19:24.422 "trtype": "$TEST_TRANSPORT", 00:19:24.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:24.422 "adrfam": "ipv4", 00:19:24.422 "trsvcid": "$NVMF_PORT", 00:19:24.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:24.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:24.422 "hdgst": ${hdgst:-false}, 00:19:24.422 "ddgst": ${ddgst:-false} 00:19:24.422 }, 00:19:24.422 "method": "bdev_nvme_attach_controller" 00:19:24.422 } 00:19:24.422 EOF 00:19:24.422 )") 00:19:24.422 23:07:42 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:19:24.422 23:07:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:24.422 "params": { 00:19:24.423 "name": "Nvme1", 00:19:24.423 "trtype": "tcp", 00:19:24.423 "traddr": "10.0.0.2", 00:19:24.423 "adrfam": "ipv4", 00:19:24.423 "trsvcid": "4420", 00:19:24.423 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.423 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:24.423 "hdgst": false, 00:19:24.423 "ddgst": false 00:19:24.423 }, 00:19:24.423 "method": "bdev_nvme_attach_controller" 00:19:24.423 }' 00:19:24.423 [2024-07-24 23:07:42.181931] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:19:24.423 [2024-07-24 23:07:42.181994] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid871649 ] 00:19:24.683 [2024-07-24 23:07:42.256641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:24.683 [2024-07-24 23:07:42.353360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.683 [2024-07-24 23:07:42.353478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:24.683 [2024-07-24 23:07:42.353480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.943 I/O targets: 00:19:24.943 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:24.943 00:19:24.943 00:19:24.943 CUnit - A unit testing framework for C - Version 2.1-3 00:19:24.943 http://cunit.sourceforge.net/ 00:19:24.943 00:19:24.943 00:19:24.943 Suite: bdevio tests on: Nvme1n1 00:19:24.943 Test: blockdev write read block 
...passed 00:19:25.203 Test: blockdev write zeroes read block ...passed 00:19:25.203 Test: blockdev write zeroes read no split ...passed 00:19:25.203 Test: blockdev write zeroes read split ...passed 00:19:25.203 Test: blockdev write zeroes read split partial ...passed 00:19:25.203 Test: blockdev reset ...[2024-07-24 23:07:42.842144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:25.203 [2024-07-24 23:07:42.842200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b3970 (9): Bad file descriptor 00:19:25.203 [2024-07-24 23:07:42.859952] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:25.203 passed 00:19:25.203 Test: blockdev write read 8 blocks ...passed 00:19:25.203 Test: blockdev write read size > 128k ...passed 00:19:25.203 Test: blockdev write read invalid size ...passed 00:19:25.203 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:25.203 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:25.203 Test: blockdev write read max offset ...passed 00:19:25.464 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:25.464 Test: blockdev writev readv 8 blocks ...passed 00:19:25.464 Test: blockdev writev readv 30 x 1block ...passed 00:19:25.464 Test: blockdev writev readv block ...passed 00:19:25.464 Test: blockdev writev readv size > 128k ...passed 00:19:25.464 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:25.464 Test: blockdev comparev and writev ...[2024-07-24 23:07:43.128134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.464 [2024-07-24 23:07:43.128158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:25.464 [2024-07-24 23:07:43.128169] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.464 [2024-07-24 23:07:43.128175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.464 [2024-07-24 23:07:43.128700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.464 [2024-07-24 23:07:43.128708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:25.464 [2024-07-24 23:07:43.128721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.464 [2024-07-24 23:07:43.128727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:25.464 [2024-07-24 23:07:43.129266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.464 [2024-07-24 23:07:43.129273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:25.464 [2024-07-24 23:07:43.129283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.464 [2024-07-24 23:07:43.129288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:25.464 [2024-07-24 23:07:43.129804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.464 [2024-07-24 23:07:43.129812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 
p:0 m:0 dnr:0 00:19:25.464 [2024-07-24 23:07:43.129821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.464 [2024-07-24 23:07:43.129826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:25.464 passed 00:19:25.464 Test: blockdev nvme passthru rw ...passed 00:19:25.464 Test: blockdev nvme passthru vendor specific ...[2024-07-24 23:07:43.214762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:25.464 [2024-07-24 23:07:43.214772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:25.464 [2024-07-24 23:07:43.215219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:25.464 [2024-07-24 23:07:43.215226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:25.464 [2024-07-24 23:07:43.215655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:25.464 [2024-07-24 23:07:43.215663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:25.464 [2024-07-24 23:07:43.216099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:25.464 [2024-07-24 23:07:43.216106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:25.464 passed 00:19:25.464 Test: blockdev nvme admin passthru ...passed 00:19:25.726 Test: blockdev copy ...passed 00:19:25.726 00:19:25.726 Run Summary: Type Total Ran Passed Failed Inactive 
00:19:25.726 suites 1 1 n/a 0 0 00:19:25.726 tests 23 23 23 0 0 00:19:25.726 asserts 152 152 152 0 n/a 00:19:25.726 00:19:25.726 Elapsed time = 1.234 seconds 00:19:25.987 23:07:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:25.987 23:07:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.987 23:07:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:25.987 23:07:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.987 23:07:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:25.987 23:07:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:25.987 23:07:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:25.987 23:07:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:19:25.987 23:07:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:25.987 23:07:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:19:25.987 23:07:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:25.987 23:07:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:25.987 rmmod nvme_tcp 00:19:25.987 rmmod nvme_fabrics 00:19:25.987 rmmod nvme_keyring 00:19:25.987 23:07:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:25.987 23:07:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:19:25.987 23:07:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:19:25.987 
23:07:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 871524 ']' 00:19:25.987 23:07:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 871524 00:19:25.987 23:07:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 871524 ']' 00:19:25.987 23:07:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 871524 00:19:25.987 23:07:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:19:25.987 23:07:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:25.987 23:07:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 871524 00:19:25.987 23:07:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:19:25.987 23:07:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:19:25.987 23:07:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 871524' 00:19:25.987 killing process with pid 871524 00:19:25.987 23:07:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 871524 00:19:25.987 23:07:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 871524 00:19:26.249 23:07:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:26.249 23:07:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:26.249 23:07:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:26.249 23:07:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:19:26.249 23:07:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:26.249 23:07:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.249 23:07:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:26.249 23:07:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:28.794 00:19:28.794 real 0m13.329s 00:19:28.794 user 0m14.894s 00:19:28.794 sys 0m7.077s 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:28.794 ************************************ 00:19:28.794 END TEST nvmf_bdevio_no_huge 00:19:28.794 ************************************ 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:28.794 ************************************ 00:19:28.794 START TEST nvmf_tls 00:19:28.794 ************************************ 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:28.794 * Looking for test storage... 
00:19:28.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:28.794 
23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:19:28.794 23:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:19:36.939 23:07:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:36.939 23:07:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:36.939 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:36.939 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:36.939 23:07:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:36.939 Found net devices under 0000:31:00.0: cvl_0_0 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:36.939 Found net devices under 0000:31:00.1: cvl_0_1 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:36.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:36.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.737 ms 00:19:36.939 00:19:36.939 --- 10.0.0.2 ping statistics --- 00:19:36.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.939 rtt min/avg/max/mdev = 0.737/0.737/0.737/0.000 ms 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:36.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:36.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.362 ms 00:19:36.939 00:19:36.939 --- 10.0.0.1 ping statistics --- 00:19:36.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.939 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:19:36.939 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:36.940 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:19:36.940 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:36.940 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:36.940 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:36.940 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:36.940 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:36.940 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:36.940 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:36.940 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:36.940 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:36.940 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls 
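The `nvmf_tcp_init` sequence traced above (nvmf/common.sh@229-268) can be read as the following standalone sketch. Interface names `cvl_0_0`/`cvl_0_1`, the namespace name, and the 10.0.0.0/24 addresses are the ones from this run; the commands need root and the two physical ports, so this is a summary of what the harness did, not something to rerun here:

```shell
TARGET_IF=cvl_0_0        # moved into the namespace, becomes 10.0.0.2
INITIATOR_IF=cvl_0_1     # stays in the root namespace as 10.0.0.1
NS=cvl_0_0_ns_spdk

# Clear any stale addressing, then isolate the target-side port.
ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

# Address both ends and bring the links (and namespace loopback) up.
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP (port 4420) in on the initiator side, then verify
# reachability in both directions before starting the target.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Moving only the target-side port into the namespace is what lets a single host exercise a real NIC-to-NIC TCP path: the initiator stays in the root namespace while the SPDK target is later launched under `ip netns exec cvl_0_0_ns_spdk`, as the `NVMF_TARGET_NS_CMD` prefix in the trace shows.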
-- common/autotest_common.sh@724 -- # xtrace_disable 00:19:36.940 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.940 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=876644 00:19:36.940 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 876644 00:19:36.940 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 876644 ']' 00:19:36.940 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.940 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:36.940 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.940 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:36.940 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.940 23:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:36.940 [2024-07-24 23:07:54.651201] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:19:36.940 [2024-07-24 23:07:54.651303] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.940 EAL: No free 2048 kB hugepages reported on node 1 00:19:37.201 [2024-07-24 23:07:54.752028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.201 [2024-07-24 23:07:54.843467] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:37.201 [2024-07-24 23:07:54.843527] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:37.201 [2024-07-24 23:07:54.843535] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:37.201 [2024-07-24 23:07:54.843541] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:37.201 [2024-07-24 23:07:54.843547] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:37.201 [2024-07-24 23:07:54.843572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.776 23:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:37.776 23:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:37.776 23:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:37.776 23:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:37.776 23:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.776 23:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.776 23:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:19:37.776 23:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:38.037 true 00:19:38.037 23:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:38.037 23:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:19:38.037 23:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:19:38.037 23:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:19:38.037 23:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:38.298 23:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:38.298 23:07:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:19:38.559 23:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:19:38.559 23:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:19:38.559 23:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:38.559 23:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:38.559 23:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:19:38.854 23:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:19:38.854 23:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:19:38.854 23:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:38.854 23:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:19:38.854 23:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:19:38.854 23:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:19:38.854 23:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:39.116 23:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:39.116 23:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:19:39.377 23:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:19:39.377 
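Each of the option checks above follows the same set-then-verify shape: write a sock option over JSON-RPC, read `sock_impl_get_options` back, extract one field, and compare it to the value just set. A minimal sketch with the `get_options` reply stubbed out — the field names come from the `jq` filters in the trace, but the full reply shape is an assumption, and `python3` stands in for `jq` only so the extraction step runs anywhere:

```shell
# Stubbed reply for `rpc.py sock_impl_get_options -i ssl` after
# `sock_impl_set_options -i ssl --tls-version 13`; a real reply
# carries more fields than the two this test inspects.
reply='{"tls_version": 13, "enable_ktls": false}'

# Equivalent of `jq -r .tls_version` / `jq -r .enable_ktls` above.
version=$(printf '%s' "$reply" | python3 -c 'import json,sys; print(json.load(sys.stdin)["tls_version"])')
ktls=$(printf '%s' "$reply" | python3 -c 'import json,sys; print(str(json.load(sys.stdin)["enable_ktls"]).lower())')

# Same comparisons the harness makes before moving on.
[[ $version == 13 ]] || echo "unexpected tls_version: $version"
[[ $ktls == false ]] || echo "unexpected enable_ktls: $ktls"
```

The trace runs this loop three times (tls_version 13, tls_version 7, then the ktls enable/disable toggle), each time treating a mismatch between the value set and the value read back as a test failure.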
23:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:19:39.377 23:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:39.377 23:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:39.377 23:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:19:39.638 23:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:19:39.639 23:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:19:39.639 23:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:39.639 23:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:39.639 23:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:39.639 23:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:39.639 23:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:19:39.639 23:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:39.639 23:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:39.639 23:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:39.639 23:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:39.639 23:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
ffeeddccbbaa99887766554433221100 1 00:19:39.639 23:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:39.639 23:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:39.639 23:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:19:39.639 23:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:39.639 23:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:39.639 23:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:39.639 23:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:19:39.639 23:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.mCpMqaNlEw 00:19:39.639 23:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:39.639 23:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.2siUzUhHLu 00:19:39.639 23:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:39.639 23:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:39.639 23:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.mCpMqaNlEw 00:19:39.900 23:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.2siUzUhHLu 00:19:39.900 23:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:39.900 23:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:40.161 23:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.mCpMqaNlEw 00:19:40.161 23:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.mCpMqaNlEw 00:19:40.161 23:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:40.422 [2024-07-24 23:07:57.973028] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:40.422 23:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:40.422 23:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:40.682 [2024-07-24 23:07:58.269746] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:40.682 [2024-07-24 23:07:58.269955] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:40.682 23:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:40.682 malloc0 00:19:40.682 23:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:40.943 23:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mCpMqaNlEw 00:19:41.202 
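The two keys generated earlier in the trace come out of `format_interchange_psk`, which the harness implements with an inline `python -` block (nvmf/common.sh@705). A sketch of what that block appears to compute, following the NVMe TLS PSK interchange format: the configured secret with a CRC32 appended, base64-encoded and wrapped as `NVMeTLSkey-1:0<digest>:…:`. Note two details are read off the log output rather than the script source and so are assumptions: the hex string itself serves as the key bytes (the `MDAxMTIy…` token decodes to the ASCII text `001122…`), and the CRC is appended little-endian:

```shell
format_interchange_psk() {
  local secret=$1 digest=$2
  python3 - "$secret" "$digest" <<'EOF'
import base64, struct, sys, zlib

# The hex string is used as-is for the key bytes, matching the log,
# where base64 of the token starts with the ASCII secret itself.
key = sys.argv[1].encode("ascii")
# 4-byte CRC32 appended to the key; little-endian order is assumed.
crc = struct.pack("<I", zlib.crc32(key) & 0xFFFFFFFF)
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02d}:{base64.b64encode(key + crc).decode('ascii')}:")
EOF
}

format_interchange_psk 00112233445566778899aabbccddeeff 1
```

With a 32-character secret this yields 36 payload bytes and hence a 48-character, unpadded base64 token, which matches the length of the `NVMeTLSkey-1:01:…:` values in the trace; the harness then writes each token to a `mktemp` path, `chmod 0600`s it, and hands it to the target and initiator via `--psk`/`--psk-path` (tls.sh@121-137).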
[2024-07-24 23:07:58.732788] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:41.202 23:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.mCpMqaNlEw 00:19:41.202 EAL: No free 2048 kB hugepages reported on node 1 00:19:51.201 Initializing NVMe Controllers 00:19:51.201 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:51.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:51.201 Initialization complete. Launching workers. 00:19:51.201 ======================================================== 00:19:51.201 Latency(us) 00:19:51.201 Device Information : IOPS MiB/s Average min max 00:19:51.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18996.99 74.21 3368.95 1198.06 6670.27 00:19:51.201 ======================================================== 00:19:51.201 Total : 18996.99 74.21 3368.95 1198.06 6670.27 00:19:51.201 00:19:51.201 23:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mCpMqaNlEw 00:19:51.201 23:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:51.201 23:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:51.201 23:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:51.201 23:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.mCpMqaNlEw' 00:19:51.201 23:08:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:51.201 23:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=879509 00:19:51.201 23:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:51.201 23:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 879509 /var/tmp/bdevperf.sock 00:19:51.201 23:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:51.201 23:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 879509 ']' 00:19:51.201 23:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:51.201 23:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:51.201 23:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:51.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:51.201 23:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:51.201 23:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.201 [2024-07-24 23:08:08.900810] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:19:51.201 [2024-07-24 23:08:08.900922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid879509 ] 00:19:51.201 EAL: No free 2048 kB hugepages reported on node 1 00:19:51.201 [2024-07-24 23:08:08.958810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.461 [2024-07-24 23:08:09.011837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:52.031 23:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:52.031 23:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:52.031 23:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mCpMqaNlEw 00:19:52.031 [2024-07-24 23:08:09.795977] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:52.031 [2024-07-24 23:08:09.796034] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:52.289 TLSTESTn1 00:19:52.289 23:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:52.290 Running I/O for 10 seconds... 
00:20:02.287 00:20:02.287 Latency(us) 00:20:02.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.287 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:02.287 Verification LBA range: start 0x0 length 0x2000 00:20:02.287 TLSTESTn1 : 10.02 3530.04 13.79 0.00 0.00 36205.65 5625.17 58982.40 00:20:02.287 =================================================================================================================== 00:20:02.287 Total : 3530.04 13.79 0.00 0.00 36205.65 5625.17 58982.40 00:20:02.287 0 00:20:02.287 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:02.287 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 879509 00:20:02.287 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 879509 ']' 00:20:02.287 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 879509 00:20:02.287 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:02.287 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:02.287 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 879509 00:20:02.548 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:02.548 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:02.548 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 879509' 00:20:02.548 killing process with pid 879509 00:20:02.548 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 879509 00:20:02.548 Received shutdown signal, test time was about 10.000000 seconds 00:20:02.548 
00:20:02.548 Latency(us) 00:20:02.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.548 =================================================================================================================== 00:20:02.548 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:02.548 [2024-07-24 23:08:20.101905] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:02.548 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 879509 00:20:02.548 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2siUzUhHLu 00:20:02.548 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:02.548 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2siUzUhHLu 00:20:02.548 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:02.548 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:02.548 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:02.548 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:02.548 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2siUzUhHLu 00:20:02.548 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:02.548 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:02.548 23:08:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:02.548 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2siUzUhHLu' 00:20:02.548 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:02.548 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=881645 00:20:02.548 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:02.548 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 881645 /var/tmp/bdevperf.sock 00:20:02.548 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:02.548 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 881645 ']' 00:20:02.548 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:02.548 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:02.548 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:02.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:02.548 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:02.548 23:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:02.548 [2024-07-24 23:08:20.266596] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:20:02.548 [2024-07-24 23:08:20.266650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid881645 ] 00:20:02.548 EAL: No free 2048 kB hugepages reported on node 1 00:20:02.548 [2024-07-24 23:08:20.322397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.809 [2024-07-24 23:08:20.373324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:03.382 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:03.382 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:03.382 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2siUzUhHLu 00:20:03.643 [2024-07-24 23:08:21.185641] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:03.643 [2024-07-24 23:08:21.185708] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:03.643 [2024-07-24 23:08:21.190216] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:03.643 [2024-07-24 23:08:21.190889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f0ad80 (107): Transport endpoint is not connected 00:20:03.643 [2024-07-24 23:08:21.191884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f0ad80 
(9): Bad file descriptor 00:20:03.643 [2024-07-24 23:08:21.192886] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:03.643 [2024-07-24 23:08:21.192892] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:03.643 [2024-07-24 23:08:21.192899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:03.643 request: 00:20:03.643 { 00:20:03.643 "name": "TLSTEST", 00:20:03.643 "trtype": "tcp", 00:20:03.643 "traddr": "10.0.0.2", 00:20:03.643 "adrfam": "ipv4", 00:20:03.643 "trsvcid": "4420", 00:20:03.643 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.643 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:03.643 "prchk_reftag": false, 00:20:03.643 "prchk_guard": false, 00:20:03.643 "hdgst": false, 00:20:03.643 "ddgst": false, 00:20:03.643 "psk": "/tmp/tmp.2siUzUhHLu", 00:20:03.643 "method": "bdev_nvme_attach_controller", 00:20:03.643 "req_id": 1 00:20:03.643 } 00:20:03.643 Got JSON-RPC error response 00:20:03.643 response: 00:20:03.643 { 00:20:03.643 "code": -5, 00:20:03.643 "message": "Input/output error" 00:20:03.643 } 00:20:03.643 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 881645 00:20:03.643 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 881645 ']' 00:20:03.643 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 881645 00:20:03.643 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:03.643 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:03.643 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 881645 00:20:03.643 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:03.643 23:08:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:03.643 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 881645' 00:20:03.643 killing process with pid 881645 00:20:03.643 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 881645 00:20:03.643 Received shutdown signal, test time was about 10.000000 seconds 00:20:03.643 00:20:03.643 Latency(us) 00:20:03.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.643 =================================================================================================================== 00:20:03.643 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:03.643 [2024-07-24 23:08:21.275658] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:03.643 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 881645 00:20:03.643 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:03.643 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:03.643 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:03.643 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:03.643 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:03.643 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.mCpMqaNlEw 00:20:03.644 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:03.644 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.mCpMqaNlEw 00:20:03.644 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:03.644 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:03.644 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:03.644 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:03.644 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.mCpMqaNlEw 00:20:03.644 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:03.644 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:03.644 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:03.644 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.mCpMqaNlEw' 00:20:03.644 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:03.644 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=881983 00:20:03.644 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:03.644 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 881983 /var/tmp/bdevperf.sock 00:20:03.644 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:03.644 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 881983 ']' 00:20:03.644 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:03.644 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:03.644 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:03.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:03.644 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:03.644 23:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.904 [2024-07-24 23:08:21.440622] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:20:03.904 [2024-07-24 23:08:21.440681] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid881983 ] 00:20:03.904 EAL: No free 2048 kB hugepages reported on node 1 00:20:03.904 [2024-07-24 23:08:21.496394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.904 [2024-07-24 23:08:21.547412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.475 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:04.475 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:04.475 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
-q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.mCpMqaNlEw 00:20:04.736 [2024-07-24 23:08:22.351669] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:04.736 [2024-07-24 23:08:22.351731] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:04.736 [2024-07-24 23:08:22.361121] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:04.736 [2024-07-24 23:08:22.361140] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:04.736 [2024-07-24 23:08:22.361158] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:04.736 [2024-07-24 23:08:22.361730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e0d80 (107): Transport endpoint is not connected 00:20:04.736 [2024-07-24 23:08:22.362725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e0d80 (9): Bad file descriptor 00:20:04.736 [2024-07-24 23:08:22.363726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:04.736 [2024-07-24 23:08:22.363733] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:04.736 [2024-07-24 23:08:22.363741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:04.736 request: 00:20:04.736 { 00:20:04.736 "name": "TLSTEST", 00:20:04.736 "trtype": "tcp", 00:20:04.736 "traddr": "10.0.0.2", 00:20:04.736 "adrfam": "ipv4", 00:20:04.736 "trsvcid": "4420", 00:20:04.736 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.736 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:04.736 "prchk_reftag": false, 00:20:04.736 "prchk_guard": false, 00:20:04.736 "hdgst": false, 00:20:04.736 "ddgst": false, 00:20:04.736 "psk": "/tmp/tmp.mCpMqaNlEw", 00:20:04.736 "method": "bdev_nvme_attach_controller", 00:20:04.736 "req_id": 1 00:20:04.736 } 00:20:04.736 Got JSON-RPC error response 00:20:04.736 response: 00:20:04.736 { 00:20:04.736 "code": -5, 00:20:04.736 "message": "Input/output error" 00:20:04.736 } 00:20:04.736 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 881983 00:20:04.736 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 881983 ']' 00:20:04.736 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 881983 00:20:04.736 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:04.736 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:04.736 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 881983 00:20:04.736 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:04.736 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:04.736 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 881983' 00:20:04.736 killing process with pid 881983 00:20:04.736 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 881983 00:20:04.736 Received shutdown signal, test time was about 
10.000000 seconds 00:20:04.736 00:20:04.737 Latency(us) 00:20:04.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.737 =================================================================================================================== 00:20:04.737 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:04.737 [2024-07-24 23:08:22.445564] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:04.737 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 881983 00:20:04.998 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:04.998 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:04.998 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:04.998 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:04.998 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:04.998 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.mCpMqaNlEw 00:20:04.998 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:04.998 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.mCpMqaNlEw 00:20:04.998 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:04.998 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:04.998 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 
00:20:04.998 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:04.998 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.mCpMqaNlEw 00:20:04.998 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:04.998 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:04.998 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:04.998 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.mCpMqaNlEw' 00:20:04.998 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:04.998 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=882148 00:20:04.999 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:04.999 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 882148 /var/tmp/bdevperf.sock 00:20:04.999 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:04.999 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 882148 ']' 00:20:04.999 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.999 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:04.999 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:20:04.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:04.999 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:04.999 23:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.999 [2024-07-24 23:08:22.602347] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:20:04.999 [2024-07-24 23:08:22.602401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid882148 ] 00:20:04.999 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.999 [2024-07-24 23:08:22.658620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.999 [2024-07-24 23:08:22.710681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:05.939 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:05.939 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:05.939 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mCpMqaNlEw 00:20:05.939 [2024-07-24 23:08:23.515024] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:05.939 [2024-07-24 23:08:23.515089] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:05.939 [2024-07-24 23:08:23.523939] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could 
not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:05.939 [2024-07-24 23:08:23.523955] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:05.939 [2024-07-24 23:08:23.523974] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:05.940 [2024-07-24 23:08:23.524182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b8d80 (107): Transport endpoint is not connected 00:20:05.940 [2024-07-24 23:08:23.525165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b8d80 (9): Bad file descriptor 00:20:05.940 [2024-07-24 23:08:23.526167] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:05.940 [2024-07-24 23:08:23.526173] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:05.940 [2024-07-24 23:08:23.526179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:05.940 request: 00:20:05.940 { 00:20:05.940 "name": "TLSTEST", 00:20:05.940 "trtype": "tcp", 00:20:05.940 "traddr": "10.0.0.2", 00:20:05.940 "adrfam": "ipv4", 00:20:05.940 "trsvcid": "4420", 00:20:05.940 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:05.940 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:05.940 "prchk_reftag": false, 00:20:05.940 "prchk_guard": false, 00:20:05.940 "hdgst": false, 00:20:05.940 "ddgst": false, 00:20:05.940 "psk": "/tmp/tmp.mCpMqaNlEw", 00:20:05.940 "method": "bdev_nvme_attach_controller", 00:20:05.940 "req_id": 1 00:20:05.940 } 00:20:05.940 Got JSON-RPC error response 00:20:05.940 response: 00:20:05.940 { 00:20:05.940 "code": -5, 00:20:05.940 "message": "Input/output error" 00:20:05.940 } 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 882148 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 882148 ']' 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 882148 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 882148 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 882148' 00:20:05.940 killing process with pid 882148 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 882148 00:20:05.940 Received shutdown signal, test time was about 
10.000000 seconds 00:20:05.940 00:20:05.940 Latency(us) 00:20:05.940 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.940 =================================================================================================================== 00:20:05.940 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:05.940 [2024-07-24 23:08:23.609223] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 882148 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:05.940 23:08:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=882349 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 882349 /var/tmp/bdevperf.sock 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 882349 ']' 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:05.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:05.940 23:08:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.200 [2024-07-24 23:08:23.767071] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:20:06.201 [2024-07-24 23:08:23.767129] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid882349 ] 00:20:06.201 EAL: No free 2048 kB hugepages reported on node 1 00:20:06.201 [2024-07-24 23:08:23.821915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.201 [2024-07-24 23:08:23.874336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:06.772 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:06.772 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:06.772 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:07.033 [2024-07-24 23:08:24.696814] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:07.033 [2024-07-24 23:08:24.698794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1348460 (9): Bad file descriptor 00:20:07.033 [2024-07-24 23:08:24.699795] 
nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:07.033 [2024-07-24 23:08:24.699802] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:07.033 [2024-07-24 23:08:24.699809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:07.033 request: 00:20:07.033 { 00:20:07.033 "name": "TLSTEST", 00:20:07.033 "trtype": "tcp", 00:20:07.033 "traddr": "10.0.0.2", 00:20:07.033 "adrfam": "ipv4", 00:20:07.033 "trsvcid": "4420", 00:20:07.033 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.033 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:07.033 "prchk_reftag": false, 00:20:07.033 "prchk_guard": false, 00:20:07.033 "hdgst": false, 00:20:07.033 "ddgst": false, 00:20:07.033 "method": "bdev_nvme_attach_controller", 00:20:07.033 "req_id": 1 00:20:07.033 } 00:20:07.033 Got JSON-RPC error response 00:20:07.033 response: 00:20:07.033 { 00:20:07.033 "code": -5, 00:20:07.033 "message": "Input/output error" 00:20:07.033 } 00:20:07.033 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 882349 00:20:07.033 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 882349 ']' 00:20:07.033 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 882349 00:20:07.033 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:07.033 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:07.033 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 882349 00:20:07.033 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:07.033 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:07.033 23:08:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 882349' 00:20:07.033 killing process with pid 882349 00:20:07.033 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 882349 00:20:07.033 Received shutdown signal, test time was about 10.000000 seconds 00:20:07.033 00:20:07.033 Latency(us) 00:20:07.033 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:07.033 =================================================================================================================== 00:20:07.033 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:07.033 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 882349 00:20:07.294 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:07.294 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:07.294 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:07.294 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:07.294 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:07.294 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 876644 00:20:07.294 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 876644 ']' 00:20:07.294 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 876644 00:20:07.294 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:07.294 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:07.294 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 876644 00:20:07.294 23:08:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:07.294 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:07.294 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 876644' 00:20:07.294 killing process with pid 876644 00:20:07.294 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 876644 00:20:07.294 [2024-07-24 23:08:24.950283] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:07.294 23:08:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 876644 00:20:07.294 23:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:07.294 23:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:07.294 23:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:07.294 23:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:07.294 23:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:07.294 23:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:20:07.294 23:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:07.555 23:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:07.555 23:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:20:07.555 23:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- 
# key_long_path=/tmp/tmp.VeAytykyZb 00:20:07.555 23:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:07.555 23:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.VeAytykyZb 00:20:07.555 23:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:07.555 23:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:07.555 23:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:07.555 23:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.555 23:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=882699 00:20:07.555 23:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 882699 00:20:07.555 23:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:07.555 23:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 882699 ']' 00:20:07.555 23:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.555 23:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:07.555 23:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:07.555 23:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:07.555 23:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.555 [2024-07-24 23:08:25.188041] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:20:07.555 [2024-07-24 23:08:25.188127] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.555 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.555 [2024-07-24 23:08:25.283565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.555 [2024-07-24 23:08:25.341957] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.816 [2024-07-24 23:08:25.341990] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.816 [2024-07-24 23:08:25.341996] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.816 [2024-07-24 23:08:25.342002] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.816 [2024-07-24 23:08:25.342007] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:07.816 [2024-07-24 23:08:25.342023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.387 23:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:08.387 23:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:08.387 23:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:08.387 23:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:08.387 23:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.387 23:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.387 23:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.VeAytykyZb 00:20:08.387 23:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.VeAytykyZb 00:20:08.387 23:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:08.387 [2024-07-24 23:08:26.132217] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:08.387 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:08.648 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:08.648 [2024-07-24 23:08:26.432951] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:08.648 [2024-07-24 23:08:26.433136] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:08.949 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:08.949 malloc0 00:20:08.949 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:09.211 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.VeAytykyZb 00:20:09.211 [2024-07-24 23:08:26.855578] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:09.211 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VeAytykyZb 00:20:09.211 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:09.211 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:09.211 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:09.211 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.VeAytykyZb' 00:20:09.211 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:09.211 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:09.211 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=883062 00:20:09.211 23:08:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:09.211 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 883062 /var/tmp/bdevperf.sock 00:20:09.211 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 883062 ']' 00:20:09.211 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:09.211 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:09.211 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:09.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:09.211 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:09.211 23:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.211 [2024-07-24 23:08:26.903520] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:20:09.211 [2024-07-24 23:08:26.903570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid883062 ] 00:20:09.211 EAL: No free 2048 kB hugepages reported on node 1 00:20:09.211 [2024-07-24 23:08:26.964479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.472 [2024-07-24 23:08:27.016726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:10.043 23:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:10.043 23:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:10.043 23:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.VeAytykyZb 00:20:10.304 [2024-07-24 23:08:27.836961] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:10.304 [2024-07-24 23:08:27.837020] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:10.304 TLSTESTn1 00:20:10.304 23:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:10.304 Running I/O for 10 seconds... 
00:20:20.303 00:20:20.303 Latency(us) 00:20:20.303 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.303 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:20.303 Verification LBA range: start 0x0 length 0x2000 00:20:20.303 TLSTESTn1 : 10.03 4429.19 17.30 0.00 0.00 28847.32 5980.16 56360.96 00:20:20.303 =================================================================================================================== 00:20:20.303 Total : 4429.19 17.30 0.00 0.00 28847.32 5980.16 56360.96 00:20:20.303 0 00:20:20.564 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:20.564 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 883062 00:20:20.564 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 883062 ']' 00:20:20.564 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 883062 00:20:20.564 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:20.564 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:20.564 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 883062 00:20:20.564 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:20.564 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:20.564 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 883062' 00:20:20.564 killing process with pid 883062 00:20:20.564 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 883062 00:20:20.564 Received shutdown signal, test time was about 10.000000 seconds 00:20:20.564 
00:20:20.564 Latency(us) 00:20:20.564 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.564 =================================================================================================================== 00:20:20.564 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:20.564 [2024-07-24 23:08:38.163102] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:20.564 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 883062 00:20:20.564 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.VeAytykyZb 00:20:20.564 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VeAytykyZb 00:20:20.564 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:20.564 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VeAytykyZb 00:20:20.564 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:20.564 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:20.564 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:20.564 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:20.564 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VeAytykyZb 00:20:20.564 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:20.564 23:08:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:20.564 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:20.564 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.VeAytykyZb' 00:20:20.564 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:20.564 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=885262 00:20:20.564 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:20.564 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 885262 /var/tmp/bdevperf.sock 00:20:20.564 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:20.564 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 885262 ']' 00:20:20.564 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:20.564 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:20.565 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:20.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:20.565 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:20.565 23:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.565 [2024-07-24 23:08:38.332139] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:20:20.565 [2024-07-24 23:08:38.332194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid885262 ] 00:20:20.825 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.825 [2024-07-24 23:08:38.387826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.825 [2024-07-24 23:08:38.440245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:21.396 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:21.396 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:21.396 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.VeAytykyZb 00:20:21.657 [2024-07-24 23:08:39.240691] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:21.657 [2024-07-24 23:08:39.240736] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:21.657 [2024-07-24 23:08:39.240742] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.VeAytykyZb 00:20:21.657 request: 00:20:21.657 { 00:20:21.657 "name": "TLSTEST", 00:20:21.657 "trtype": "tcp", 00:20:21.657 "traddr": "10.0.0.2", 00:20:21.657 
"adrfam": "ipv4", 00:20:21.657 "trsvcid": "4420", 00:20:21.657 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.657 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:21.657 "prchk_reftag": false, 00:20:21.657 "prchk_guard": false, 00:20:21.657 "hdgst": false, 00:20:21.657 "ddgst": false, 00:20:21.657 "psk": "/tmp/tmp.VeAytykyZb", 00:20:21.657 "method": "bdev_nvme_attach_controller", 00:20:21.657 "req_id": 1 00:20:21.657 } 00:20:21.657 Got JSON-RPC error response 00:20:21.657 response: 00:20:21.657 { 00:20:21.657 "code": -1, 00:20:21.657 "message": "Operation not permitted" 00:20:21.657 } 00:20:21.657 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 885262 00:20:21.657 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 885262 ']' 00:20:21.657 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 885262 00:20:21.657 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:21.657 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:21.657 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 885262 00:20:21.657 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:21.657 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:21.657 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 885262' 00:20:21.657 killing process with pid 885262 00:20:21.657 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 885262 00:20:21.657 Received shutdown signal, test time was about 10.000000 seconds 00:20:21.657 00:20:21.657 Latency(us) 00:20:21.657 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:20:21.657 =================================================================================================================== 00:20:21.657 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:21.657 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 885262 00:20:21.657 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:21.657 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:21.657 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:21.657 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:21.657 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:21.657 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 882699 00:20:21.657 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 882699 ']' 00:20:21.657 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 882699 00:20:21.657 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:21.657 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:21.657 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 882699 00:20:21.918 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:21.918 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:21.918 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 882699' 00:20:21.918 killing process with pid 882699 00:20:21.918 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 882699 00:20:21.918 [2024-07-24 23:08:39.485922] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:21.918 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 882699 00:20:21.918 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:21.918 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:21.918 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:21.918 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.918 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=885441 00:20:21.918 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 885441 00:20:21.918 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:21.918 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 885441 ']' 00:20:21.918 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.918 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:21.918 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:21.918 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:21.918 23:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.918 [2024-07-24 23:08:39.679867] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:20:21.918 [2024-07-24 23:08:39.679922] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.179 EAL: No free 2048 kB hugepages reported on node 1 00:20:22.179 [2024-07-24 23:08:39.765857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.179 [2024-07-24 23:08:39.819769] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.179 [2024-07-24 23:08:39.819801] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.179 [2024-07-24 23:08:39.819807] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:22.179 [2024-07-24 23:08:39.819811] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:22.179 [2024-07-24 23:08:39.819815] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:22.179 [2024-07-24 23:08:39.819834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.748 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:22.748 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:22.748 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:22.748 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:22.748 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:22.748 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:22.748 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.VeAytykyZb 00:20:22.748 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:22.748 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.VeAytykyZb 00:20:22.748 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:20:22.748 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:22.748 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:20:22.748 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:22.748 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.VeAytykyZb 00:20:22.748 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.VeAytykyZb 00:20:22.748 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:23.009 [2024-07-24 23:08:40.609439] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.009 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:23.009 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:23.270 [2024-07-24 23:08:40.910174] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:23.270 [2024-07-24 23:08:40.910368] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:23.270 23:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:23.530 malloc0 00:20:23.530 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:23.530 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.VeAytykyZb 00:20:23.792 [2024-07-24 23:08:41.340861] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:23.792 [2024-07-24 23:08:41.340877] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:23.792 [2024-07-24 23:08:41.340896] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:23.792 request: 00:20:23.792 { 
00:20:23.792 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.792 "host": "nqn.2016-06.io.spdk:host1", 00:20:23.792 "psk": "/tmp/tmp.VeAytykyZb", 00:20:23.792 "method": "nvmf_subsystem_add_host", 00:20:23.792 "req_id": 1 00:20:23.792 } 00:20:23.792 Got JSON-RPC error response 00:20:23.792 response: 00:20:23.792 { 00:20:23.792 "code": -32603, 00:20:23.792 "message": "Internal error" 00:20:23.792 } 00:20:23.792 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:23.792 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:23.792 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:23.792 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:23.792 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 885441 00:20:23.792 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 885441 ']' 00:20:23.792 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 885441 00:20:23.792 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:23.792 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:23.792 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 885441 00:20:23.792 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:23.792 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:23.792 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 885441' 00:20:23.792 killing process with pid 885441 00:20:23.792 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 885441 00:20:23.792 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 885441 00:20:23.792 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.VeAytykyZb 00:20:23.792 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:23.792 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:23.792 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:23.792 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.792 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=885894 00:20:23.792 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 885894 00:20:23.792 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:23.792 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 885894 ']' 00:20:23.792 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.792 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:23.792 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:23.792 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:23.792 23:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.053 [2024-07-24 23:08:41.594041] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:20:24.054 [2024-07-24 23:08:41.594095] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.054 EAL: No free 2048 kB hugepages reported on node 1 00:20:24.054 [2024-07-24 23:08:41.683850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.054 [2024-07-24 23:08:41.742428] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.054 [2024-07-24 23:08:41.742466] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.054 [2024-07-24 23:08:41.742472] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.054 [2024-07-24 23:08:41.742476] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.054 [2024-07-24 23:08:41.742480] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:24.054 [2024-07-24 23:08:41.742497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.625 23:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:24.625 23:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:24.625 23:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:24.625 23:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:24.625 23:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.625 23:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.625 23:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.VeAytykyZb 00:20:24.625 23:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.VeAytykyZb 00:20:24.625 23:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:24.885 [2024-07-24 23:08:42.537353] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:24.885 23:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:25.147 23:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:25.147 [2024-07-24 23:08:42.818041] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:25.147 [2024-07-24 23:08:42.818203] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:25.147 23:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:25.408 malloc0 00:20:25.408 23:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:25.408 23:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.VeAytykyZb 00:20:25.669 [2024-07-24 23:08:43.248841] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:25.669 23:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=886257 00:20:25.669 23:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:25.669 23:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:25.669 23:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 886257 /var/tmp/bdevperf.sock 00:20:25.669 23:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 886257 ']' 00:20:25.669 23:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:25.669 23:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:25.669 23:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:20:25.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:25.669 23:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:25.669 23:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.669 [2024-07-24 23:08:43.312408] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:20:25.669 [2024-07-24 23:08:43.312475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid886257 ] 00:20:25.669 EAL: No free 2048 kB hugepages reported on node 1 00:20:25.669 [2024-07-24 23:08:43.373329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.669 [2024-07-24 23:08:43.425632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.609 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:26.609 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:26.609 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.VeAytykyZb 00:20:26.609 [2024-07-24 23:08:44.221870] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:26.609 [2024-07-24 23:08:44.221935] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:26.609 TLSTESTn1 00:20:26.609 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:26.870 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:20:26.870 "subsystems": [ 00:20:26.870 { 00:20:26.870 "subsystem": "keyring", 00:20:26.870 "config": [] 00:20:26.870 }, 00:20:26.870 { 00:20:26.870 "subsystem": "iobuf", 00:20:26.870 "config": [ 00:20:26.870 { 00:20:26.870 "method": "iobuf_set_options", 00:20:26.870 "params": { 00:20:26.870 "small_pool_count": 8192, 00:20:26.870 "large_pool_count": 1024, 00:20:26.870 "small_bufsize": 8192, 00:20:26.870 "large_bufsize": 135168 00:20:26.870 } 00:20:26.870 } 00:20:26.870 ] 00:20:26.870 }, 00:20:26.870 { 00:20:26.870 "subsystem": "sock", 00:20:26.870 "config": [ 00:20:26.870 { 00:20:26.870 "method": "sock_set_default_impl", 00:20:26.870 "params": { 00:20:26.870 "impl_name": "posix" 00:20:26.870 } 00:20:26.870 }, 00:20:26.870 { 00:20:26.870 "method": "sock_impl_set_options", 00:20:26.870 "params": { 00:20:26.870 "impl_name": "ssl", 00:20:26.870 "recv_buf_size": 4096, 00:20:26.870 "send_buf_size": 4096, 00:20:26.870 "enable_recv_pipe": true, 00:20:26.870 "enable_quickack": false, 00:20:26.870 "enable_placement_id": 0, 00:20:26.870 "enable_zerocopy_send_server": true, 00:20:26.870 "enable_zerocopy_send_client": false, 00:20:26.870 "zerocopy_threshold": 0, 00:20:26.870 "tls_version": 0, 00:20:26.870 "enable_ktls": false 00:20:26.870 } 00:20:26.870 }, 00:20:26.870 { 00:20:26.870 "method": "sock_impl_set_options", 00:20:26.870 "params": { 00:20:26.870 "impl_name": "posix", 00:20:26.870 "recv_buf_size": 2097152, 00:20:26.870 "send_buf_size": 2097152, 00:20:26.870 "enable_recv_pipe": true, 00:20:26.870 "enable_quickack": false, 00:20:26.870 "enable_placement_id": 0, 00:20:26.870 "enable_zerocopy_send_server": true, 00:20:26.870 "enable_zerocopy_send_client": false, 00:20:26.870 "zerocopy_threshold": 0, 00:20:26.870 "tls_version": 0, 00:20:26.870 "enable_ktls": false 00:20:26.870 } 
00:20:26.870 } 00:20:26.870 ] 00:20:26.870 }, 00:20:26.870 { 00:20:26.870 "subsystem": "vmd", 00:20:26.870 "config": [] 00:20:26.870 }, 00:20:26.870 { 00:20:26.870 "subsystem": "accel", 00:20:26.870 "config": [ 00:20:26.870 { 00:20:26.870 "method": "accel_set_options", 00:20:26.870 "params": { 00:20:26.870 "small_cache_size": 128, 00:20:26.870 "large_cache_size": 16, 00:20:26.870 "task_count": 2048, 00:20:26.870 "sequence_count": 2048, 00:20:26.870 "buf_count": 2048 00:20:26.870 } 00:20:26.870 } 00:20:26.870 ] 00:20:26.870 }, 00:20:26.870 { 00:20:26.870 "subsystem": "bdev", 00:20:26.870 "config": [ 00:20:26.870 { 00:20:26.870 "method": "bdev_set_options", 00:20:26.870 "params": { 00:20:26.870 "bdev_io_pool_size": 65535, 00:20:26.870 "bdev_io_cache_size": 256, 00:20:26.870 "bdev_auto_examine": true, 00:20:26.870 "iobuf_small_cache_size": 128, 00:20:26.870 "iobuf_large_cache_size": 16 00:20:26.870 } 00:20:26.870 }, 00:20:26.870 { 00:20:26.870 "method": "bdev_raid_set_options", 00:20:26.870 "params": { 00:20:26.870 "process_window_size_kb": 1024, 00:20:26.870 "process_max_bandwidth_mb_sec": 0 00:20:26.870 } 00:20:26.870 }, 00:20:26.870 { 00:20:26.870 "method": "bdev_iscsi_set_options", 00:20:26.870 "params": { 00:20:26.870 "timeout_sec": 30 00:20:26.870 } 00:20:26.870 }, 00:20:26.870 { 00:20:26.870 "method": "bdev_nvme_set_options", 00:20:26.870 "params": { 00:20:26.870 "action_on_timeout": "none", 00:20:26.870 "timeout_us": 0, 00:20:26.870 "timeout_admin_us": 0, 00:20:26.870 "keep_alive_timeout_ms": 10000, 00:20:26.870 "arbitration_burst": 0, 00:20:26.870 "low_priority_weight": 0, 00:20:26.870 "medium_priority_weight": 0, 00:20:26.870 "high_priority_weight": 0, 00:20:26.870 "nvme_adminq_poll_period_us": 10000, 00:20:26.870 "nvme_ioq_poll_period_us": 0, 00:20:26.870 "io_queue_requests": 0, 00:20:26.870 "delay_cmd_submit": true, 00:20:26.870 "transport_retry_count": 4, 00:20:26.870 "bdev_retry_count": 3, 00:20:26.870 "transport_ack_timeout": 0, 00:20:26.870 
"ctrlr_loss_timeout_sec": 0, 00:20:26.870 "reconnect_delay_sec": 0, 00:20:26.870 "fast_io_fail_timeout_sec": 0, 00:20:26.870 "disable_auto_failback": false, 00:20:26.870 "generate_uuids": false, 00:20:26.870 "transport_tos": 0, 00:20:26.870 "nvme_error_stat": false, 00:20:26.870 "rdma_srq_size": 0, 00:20:26.870 "io_path_stat": false, 00:20:26.870 "allow_accel_sequence": false, 00:20:26.870 "rdma_max_cq_size": 0, 00:20:26.870 "rdma_cm_event_timeout_ms": 0, 00:20:26.870 "dhchap_digests": [ 00:20:26.870 "sha256", 00:20:26.870 "sha384", 00:20:26.870 "sha512" 00:20:26.870 ], 00:20:26.870 "dhchap_dhgroups": [ 00:20:26.870 "null", 00:20:26.870 "ffdhe2048", 00:20:26.870 "ffdhe3072", 00:20:26.870 "ffdhe4096", 00:20:26.870 "ffdhe6144", 00:20:26.870 "ffdhe8192" 00:20:26.870 ] 00:20:26.870 } 00:20:26.870 }, 00:20:26.870 { 00:20:26.870 "method": "bdev_nvme_set_hotplug", 00:20:26.870 "params": { 00:20:26.870 "period_us": 100000, 00:20:26.870 "enable": false 00:20:26.870 } 00:20:26.870 }, 00:20:26.870 { 00:20:26.870 "method": "bdev_malloc_create", 00:20:26.870 "params": { 00:20:26.870 "name": "malloc0", 00:20:26.870 "num_blocks": 8192, 00:20:26.870 "block_size": 4096, 00:20:26.870 "physical_block_size": 4096, 00:20:26.870 "uuid": "6b357c37-b182-4b01-812f-4c64b58ff01b", 00:20:26.870 "optimal_io_boundary": 0, 00:20:26.870 "md_size": 0, 00:20:26.870 "dif_type": 0, 00:20:26.870 "dif_is_head_of_md": false, 00:20:26.870 "dif_pi_format": 0 00:20:26.870 } 00:20:26.870 }, 00:20:26.870 { 00:20:26.870 "method": "bdev_wait_for_examine" 00:20:26.870 } 00:20:26.870 ] 00:20:26.870 }, 00:20:26.870 { 00:20:26.870 "subsystem": "nbd", 00:20:26.870 "config": [] 00:20:26.870 }, 00:20:26.870 { 00:20:26.871 "subsystem": "scheduler", 00:20:26.871 "config": [ 00:20:26.871 { 00:20:26.871 "method": "framework_set_scheduler", 00:20:26.871 "params": { 00:20:26.871 "name": "static" 00:20:26.871 } 00:20:26.871 } 00:20:26.871 ] 00:20:26.871 }, 00:20:26.871 { 00:20:26.871 "subsystem": "nvmf", 00:20:26.871 
"config": [ 00:20:26.871 { 00:20:26.871 "method": "nvmf_set_config", 00:20:26.871 "params": { 00:20:26.871 "discovery_filter": "match_any", 00:20:26.871 "admin_cmd_passthru": { 00:20:26.871 "identify_ctrlr": false 00:20:26.871 } 00:20:26.871 } 00:20:26.871 }, 00:20:26.871 { 00:20:26.871 "method": "nvmf_set_max_subsystems", 00:20:26.871 "params": { 00:20:26.871 "max_subsystems": 1024 00:20:26.871 } 00:20:26.871 }, 00:20:26.871 { 00:20:26.871 "method": "nvmf_set_crdt", 00:20:26.871 "params": { 00:20:26.871 "crdt1": 0, 00:20:26.871 "crdt2": 0, 00:20:26.871 "crdt3": 0 00:20:26.871 } 00:20:26.871 }, 00:20:26.871 { 00:20:26.871 "method": "nvmf_create_transport", 00:20:26.871 "params": { 00:20:26.871 "trtype": "TCP", 00:20:26.871 "max_queue_depth": 128, 00:20:26.871 "max_io_qpairs_per_ctrlr": 127, 00:20:26.871 "in_capsule_data_size": 4096, 00:20:26.871 "max_io_size": 131072, 00:20:26.871 "io_unit_size": 131072, 00:20:26.871 "max_aq_depth": 128, 00:20:26.871 "num_shared_buffers": 511, 00:20:26.871 "buf_cache_size": 4294967295, 00:20:26.871 "dif_insert_or_strip": false, 00:20:26.871 "zcopy": false, 00:20:26.871 "c2h_success": false, 00:20:26.871 "sock_priority": 0, 00:20:26.871 "abort_timeout_sec": 1, 00:20:26.871 "ack_timeout": 0, 00:20:26.871 "data_wr_pool_size": 0 00:20:26.871 } 00:20:26.871 }, 00:20:26.871 { 00:20:26.871 "method": "nvmf_create_subsystem", 00:20:26.871 "params": { 00:20:26.871 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:26.871 "allow_any_host": false, 00:20:26.871 "serial_number": "SPDK00000000000001", 00:20:26.871 "model_number": "SPDK bdev Controller", 00:20:26.871 "max_namespaces": 10, 00:20:26.871 "min_cntlid": 1, 00:20:26.871 "max_cntlid": 65519, 00:20:26.871 "ana_reporting": false 00:20:26.871 } 00:20:26.871 }, 00:20:26.871 { 00:20:26.871 "method": "nvmf_subsystem_add_host", 00:20:26.871 "params": { 00:20:26.871 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:26.871 "host": "nqn.2016-06.io.spdk:host1", 00:20:26.871 "psk": "/tmp/tmp.VeAytykyZb" 
00:20:26.871 } 00:20:26.871 }, 00:20:26.871 { 00:20:26.871 "method": "nvmf_subsystem_add_ns", 00:20:26.871 "params": { 00:20:26.871 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:26.871 "namespace": { 00:20:26.871 "nsid": 1, 00:20:26.871 "bdev_name": "malloc0", 00:20:26.871 "nguid": "6B357C37B1824B01812F4C64B58FF01B", 00:20:26.871 "uuid": "6b357c37-b182-4b01-812f-4c64b58ff01b", 00:20:26.871 "no_auto_visible": false 00:20:26.871 } 00:20:26.871 } 00:20:26.871 }, 00:20:26.871 { 00:20:26.871 "method": "nvmf_subsystem_add_listener", 00:20:26.871 "params": { 00:20:26.871 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:26.871 "listen_address": { 00:20:26.871 "trtype": "TCP", 00:20:26.871 "adrfam": "IPv4", 00:20:26.871 "traddr": "10.0.0.2", 00:20:26.871 "trsvcid": "4420" 00:20:26.871 }, 00:20:26.871 "secure_channel": true 00:20:26.871 } 00:20:26.871 } 00:20:26.871 ] 00:20:26.871 } 00:20:26.871 ] 00:20:26.871 }' 00:20:26.871 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:27.132 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:20:27.132 "subsystems": [ 00:20:27.132 { 00:20:27.132 "subsystem": "keyring", 00:20:27.132 "config": [] 00:20:27.132 }, 00:20:27.132 { 00:20:27.132 "subsystem": "iobuf", 00:20:27.132 "config": [ 00:20:27.132 { 00:20:27.132 "method": "iobuf_set_options", 00:20:27.132 "params": { 00:20:27.132 "small_pool_count": 8192, 00:20:27.132 "large_pool_count": 1024, 00:20:27.132 "small_bufsize": 8192, 00:20:27.132 "large_bufsize": 135168 00:20:27.132 } 00:20:27.132 } 00:20:27.132 ] 00:20:27.132 }, 00:20:27.132 { 00:20:27.132 "subsystem": "sock", 00:20:27.132 "config": [ 00:20:27.132 { 00:20:27.132 "method": "sock_set_default_impl", 00:20:27.132 "params": { 00:20:27.132 "impl_name": "posix" 00:20:27.132 } 00:20:27.132 }, 00:20:27.132 { 00:20:27.132 "method": "sock_impl_set_options", 00:20:27.132 
"params": { 00:20:27.132 "impl_name": "ssl", 00:20:27.132 "recv_buf_size": 4096, 00:20:27.132 "send_buf_size": 4096, 00:20:27.132 "enable_recv_pipe": true, 00:20:27.132 "enable_quickack": false, 00:20:27.132 "enable_placement_id": 0, 00:20:27.132 "enable_zerocopy_send_server": true, 00:20:27.132 "enable_zerocopy_send_client": false, 00:20:27.132 "zerocopy_threshold": 0, 00:20:27.132 "tls_version": 0, 00:20:27.132 "enable_ktls": false 00:20:27.132 } 00:20:27.132 }, 00:20:27.132 { 00:20:27.132 "method": "sock_impl_set_options", 00:20:27.132 "params": { 00:20:27.132 "impl_name": "posix", 00:20:27.132 "recv_buf_size": 2097152, 00:20:27.132 "send_buf_size": 2097152, 00:20:27.132 "enable_recv_pipe": true, 00:20:27.132 "enable_quickack": false, 00:20:27.132 "enable_placement_id": 0, 00:20:27.132 "enable_zerocopy_send_server": true, 00:20:27.132 "enable_zerocopy_send_client": false, 00:20:27.132 "zerocopy_threshold": 0, 00:20:27.132 "tls_version": 0, 00:20:27.132 "enable_ktls": false 00:20:27.132 } 00:20:27.132 } 00:20:27.132 ] 00:20:27.132 }, 00:20:27.132 { 00:20:27.132 "subsystem": "vmd", 00:20:27.132 "config": [] 00:20:27.132 }, 00:20:27.132 { 00:20:27.132 "subsystem": "accel", 00:20:27.132 "config": [ 00:20:27.132 { 00:20:27.132 "method": "accel_set_options", 00:20:27.132 "params": { 00:20:27.132 "small_cache_size": 128, 00:20:27.132 "large_cache_size": 16, 00:20:27.132 "task_count": 2048, 00:20:27.132 "sequence_count": 2048, 00:20:27.132 "buf_count": 2048 00:20:27.132 } 00:20:27.132 } 00:20:27.132 ] 00:20:27.132 }, 00:20:27.132 { 00:20:27.132 "subsystem": "bdev", 00:20:27.132 "config": [ 00:20:27.132 { 00:20:27.132 "method": "bdev_set_options", 00:20:27.132 "params": { 00:20:27.132 "bdev_io_pool_size": 65535, 00:20:27.132 "bdev_io_cache_size": 256, 00:20:27.132 "bdev_auto_examine": true, 00:20:27.133 "iobuf_small_cache_size": 128, 00:20:27.133 "iobuf_large_cache_size": 16 00:20:27.133 } 00:20:27.133 }, 00:20:27.133 { 00:20:27.133 "method": "bdev_raid_set_options", 
00:20:27.133 "params": { 00:20:27.133 "process_window_size_kb": 1024, 00:20:27.133 "process_max_bandwidth_mb_sec": 0 00:20:27.133 } 00:20:27.133 }, 00:20:27.133 { 00:20:27.133 "method": "bdev_iscsi_set_options", 00:20:27.133 "params": { 00:20:27.133 "timeout_sec": 30 00:20:27.133 } 00:20:27.133 }, 00:20:27.133 { 00:20:27.133 "method": "bdev_nvme_set_options", 00:20:27.133 "params": { 00:20:27.133 "action_on_timeout": "none", 00:20:27.133 "timeout_us": 0, 00:20:27.133 "timeout_admin_us": 0, 00:20:27.133 "keep_alive_timeout_ms": 10000, 00:20:27.133 "arbitration_burst": 0, 00:20:27.133 "low_priority_weight": 0, 00:20:27.133 "medium_priority_weight": 0, 00:20:27.133 "high_priority_weight": 0, 00:20:27.133 "nvme_adminq_poll_period_us": 10000, 00:20:27.133 "nvme_ioq_poll_period_us": 0, 00:20:27.133 "io_queue_requests": 512, 00:20:27.133 "delay_cmd_submit": true, 00:20:27.133 "transport_retry_count": 4, 00:20:27.133 "bdev_retry_count": 3, 00:20:27.133 "transport_ack_timeout": 0, 00:20:27.133 "ctrlr_loss_timeout_sec": 0, 00:20:27.133 "reconnect_delay_sec": 0, 00:20:27.133 "fast_io_fail_timeout_sec": 0, 00:20:27.133 "disable_auto_failback": false, 00:20:27.133 "generate_uuids": false, 00:20:27.133 "transport_tos": 0, 00:20:27.133 "nvme_error_stat": false, 00:20:27.133 "rdma_srq_size": 0, 00:20:27.133 "io_path_stat": false, 00:20:27.133 "allow_accel_sequence": false, 00:20:27.133 "rdma_max_cq_size": 0, 00:20:27.133 "rdma_cm_event_timeout_ms": 0, 00:20:27.133 "dhchap_digests": [ 00:20:27.133 "sha256", 00:20:27.133 "sha384", 00:20:27.133 "sha512" 00:20:27.133 ], 00:20:27.133 "dhchap_dhgroups": [ 00:20:27.133 "null", 00:20:27.133 "ffdhe2048", 00:20:27.133 "ffdhe3072", 00:20:27.133 "ffdhe4096", 00:20:27.133 "ffdhe6144", 00:20:27.133 "ffdhe8192" 00:20:27.133 ] 00:20:27.133 } 00:20:27.133 }, 00:20:27.133 { 00:20:27.133 "method": "bdev_nvme_attach_controller", 00:20:27.133 "params": { 00:20:27.133 "name": "TLSTEST", 00:20:27.133 "trtype": "TCP", 00:20:27.133 "adrfam": "IPv4", 
00:20:27.133 "traddr": "10.0.0.2", 00:20:27.133 "trsvcid": "4420", 00:20:27.133 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.133 "prchk_reftag": false, 00:20:27.133 "prchk_guard": false, 00:20:27.133 "ctrlr_loss_timeout_sec": 0, 00:20:27.133 "reconnect_delay_sec": 0, 00:20:27.133 "fast_io_fail_timeout_sec": 0, 00:20:27.133 "psk": "/tmp/tmp.VeAytykyZb", 00:20:27.133 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:27.133 "hdgst": false, 00:20:27.133 "ddgst": false 00:20:27.133 } 00:20:27.133 }, 00:20:27.133 { 00:20:27.133 "method": "bdev_nvme_set_hotplug", 00:20:27.133 "params": { 00:20:27.133 "period_us": 100000, 00:20:27.133 "enable": false 00:20:27.133 } 00:20:27.133 }, 00:20:27.133 { 00:20:27.133 "method": "bdev_wait_for_examine" 00:20:27.133 } 00:20:27.133 ] 00:20:27.133 }, 00:20:27.133 { 00:20:27.133 "subsystem": "nbd", 00:20:27.133 "config": [] 00:20:27.133 } 00:20:27.133 ] 00:20:27.133 }' 00:20:27.133 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 886257 00:20:27.133 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 886257 ']' 00:20:27.133 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 886257 00:20:27.133 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:27.133 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:27.133 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 886257 00:20:27.133 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:27.133 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:27.133 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 886257' 00:20:27.133 killing process with pid 
886257 00:20:27.133 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 886257 00:20:27.133 Received shutdown signal, test time was about 10.000000 seconds 00:20:27.133 00:20:27.133 Latency(us) 00:20:27.133 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.133 =================================================================================================================== 00:20:27.133 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:27.133 [2024-07-24 23:08:44.864600] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:27.133 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 886257 00:20:27.394 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 885894 00:20:27.394 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 885894 ']' 00:20:27.394 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 885894 00:20:27.394 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:27.394 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:27.394 23:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 885894 00:20:27.394 23:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:27.394 23:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:27.394 23:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 885894' 00:20:27.394 killing process with pid 885894 00:20:27.394 23:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 
-- # kill 885894 00:20:27.394 [2024-07-24 23:08:45.032829] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:27.394 23:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 885894 00:20:27.394 23:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:27.394 23:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:27.394 23:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:27.394 23:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:27.394 23:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:20:27.394 "subsystems": [ 00:20:27.394 { 00:20:27.394 "subsystem": "keyring", 00:20:27.394 "config": [] 00:20:27.394 }, 00:20:27.394 { 00:20:27.394 "subsystem": "iobuf", 00:20:27.394 "config": [ 00:20:27.394 { 00:20:27.394 "method": "iobuf_set_options", 00:20:27.394 "params": { 00:20:27.394 "small_pool_count": 8192, 00:20:27.394 "large_pool_count": 1024, 00:20:27.394 "small_bufsize": 8192, 00:20:27.395 "large_bufsize": 135168 00:20:27.395 } 00:20:27.395 } 00:20:27.395 ] 00:20:27.395 }, 00:20:27.395 { 00:20:27.395 "subsystem": "sock", 00:20:27.395 "config": [ 00:20:27.395 { 00:20:27.395 "method": "sock_set_default_impl", 00:20:27.395 "params": { 00:20:27.395 "impl_name": "posix" 00:20:27.395 } 00:20:27.395 }, 00:20:27.395 { 00:20:27.395 "method": "sock_impl_set_options", 00:20:27.395 "params": { 00:20:27.395 "impl_name": "ssl", 00:20:27.395 "recv_buf_size": 4096, 00:20:27.395 "send_buf_size": 4096, 00:20:27.395 "enable_recv_pipe": true, 00:20:27.395 "enable_quickack": false, 00:20:27.395 "enable_placement_id": 0, 00:20:27.395 "enable_zerocopy_send_server": true, 00:20:27.395 "enable_zerocopy_send_client": false, 00:20:27.395 "zerocopy_threshold": 
0, 00:20:27.395 "tls_version": 0, 00:20:27.395 "enable_ktls": false 00:20:27.395 } 00:20:27.395 }, 00:20:27.395 { 00:20:27.395 "method": "sock_impl_set_options", 00:20:27.395 "params": { 00:20:27.395 "impl_name": "posix", 00:20:27.395 "recv_buf_size": 2097152, 00:20:27.395 "send_buf_size": 2097152, 00:20:27.395 "enable_recv_pipe": true, 00:20:27.395 "enable_quickack": false, 00:20:27.395 "enable_placement_id": 0, 00:20:27.395 "enable_zerocopy_send_server": true, 00:20:27.395 "enable_zerocopy_send_client": false, 00:20:27.395 "zerocopy_threshold": 0, 00:20:27.395 "tls_version": 0, 00:20:27.395 "enable_ktls": false 00:20:27.395 } 00:20:27.395 } 00:20:27.395 ] 00:20:27.395 }, 00:20:27.395 { 00:20:27.395 "subsystem": "vmd", 00:20:27.395 "config": [] 00:20:27.395 }, 00:20:27.395 { 00:20:27.395 "subsystem": "accel", 00:20:27.395 "config": [ 00:20:27.395 { 00:20:27.395 "method": "accel_set_options", 00:20:27.395 "params": { 00:20:27.395 "small_cache_size": 128, 00:20:27.395 "large_cache_size": 16, 00:20:27.395 "task_count": 2048, 00:20:27.395 "sequence_count": 2048, 00:20:27.395 "buf_count": 2048 00:20:27.395 } 00:20:27.395 } 00:20:27.395 ] 00:20:27.395 }, 00:20:27.395 { 00:20:27.395 "subsystem": "bdev", 00:20:27.395 "config": [ 00:20:27.395 { 00:20:27.395 "method": "bdev_set_options", 00:20:27.395 "params": { 00:20:27.395 "bdev_io_pool_size": 65535, 00:20:27.395 "bdev_io_cache_size": 256, 00:20:27.395 "bdev_auto_examine": true, 00:20:27.395 "iobuf_small_cache_size": 128, 00:20:27.395 "iobuf_large_cache_size": 16 00:20:27.395 } 00:20:27.395 }, 00:20:27.395 { 00:20:27.395 "method": "bdev_raid_set_options", 00:20:27.395 "params": { 00:20:27.395 "process_window_size_kb": 1024, 00:20:27.395 "process_max_bandwidth_mb_sec": 0 00:20:27.395 } 00:20:27.395 }, 00:20:27.395 { 00:20:27.395 "method": "bdev_iscsi_set_options", 00:20:27.395 "params": { 00:20:27.395 "timeout_sec": 30 00:20:27.395 } 00:20:27.395 }, 00:20:27.395 { 00:20:27.395 "method": "bdev_nvme_set_options", 
00:20:27.395 "params": { 00:20:27.395 "action_on_timeout": "none", 00:20:27.395 "timeout_us": 0, 00:20:27.395 "timeout_admin_us": 0, 00:20:27.395 "keep_alive_timeout_ms": 10000, 00:20:27.395 "arbitration_burst": 0, 00:20:27.395 "low_priority_weight": 0, 00:20:27.395 "medium_priority_weight": 0, 00:20:27.395 "high_priority_weight": 0, 00:20:27.395 "nvme_adminq_poll_period_us": 10000, 00:20:27.395 "nvme_ioq_poll_period_us": 0, 00:20:27.395 "io_queue_requests": 0, 00:20:27.395 "delay_cmd_submit": true, 00:20:27.395 "transport_retry_count": 4, 00:20:27.395 "bdev_retry_count": 3, 00:20:27.395 "transport_ack_timeout": 0, 00:20:27.395 "ctrlr_loss_timeout_sec": 0, 00:20:27.395 "reconnect_delay_sec": 0, 00:20:27.395 "fast_io_fail_timeout_sec": 0, 00:20:27.395 "disable_auto_failback": false, 00:20:27.395 "generate_uuids": false, 00:20:27.395 "transport_tos": 0, 00:20:27.395 "nvme_error_stat": false, 00:20:27.395 "rdma_srq_size": 0, 00:20:27.395 "io_path_stat": false, 00:20:27.395 "allow_accel_sequence": false, 00:20:27.395 "rdma_max_cq_size": 0, 00:20:27.395 "rdma_cm_event_timeout_ms": 0, 00:20:27.395 "dhchap_digests": [ 00:20:27.395 "sha256", 00:20:27.395 "sha384", 00:20:27.395 "sha512" 00:20:27.395 ], 00:20:27.395 "dhchap_dhgroups": [ 00:20:27.395 "null", 00:20:27.395 "ffdhe2048", 00:20:27.395 "ffdhe3072", 00:20:27.395 "ffdhe4096", 00:20:27.395 "ffdhe6144", 00:20:27.395 "ffdhe8192" 00:20:27.395 ] 00:20:27.395 } 00:20:27.395 }, 00:20:27.395 { 00:20:27.395 "method": "bdev_nvme_set_hotplug", 00:20:27.395 "params": { 00:20:27.395 "period_us": 100000, 00:20:27.395 "enable": false 00:20:27.395 } 00:20:27.395 }, 00:20:27.395 { 00:20:27.395 "method": "bdev_malloc_create", 00:20:27.395 "params": { 00:20:27.395 "name": "malloc0", 00:20:27.395 "num_blocks": 8192, 00:20:27.395 "block_size": 4096, 00:20:27.395 "physical_block_size": 4096, 00:20:27.395 "uuid": "6b357c37-b182-4b01-812f-4c64b58ff01b", 00:20:27.395 "optimal_io_boundary": 0, 00:20:27.395 "md_size": 0, 00:20:27.395 
"dif_type": 0, 00:20:27.395 "dif_is_head_of_md": false, 00:20:27.395 "dif_pi_format": 0 00:20:27.395 } 00:20:27.395 }, 00:20:27.395 { 00:20:27.395 "method": "bdev_wait_for_examine" 00:20:27.395 } 00:20:27.395 ] 00:20:27.395 }, 00:20:27.395 { 00:20:27.395 "subsystem": "nbd", 00:20:27.395 "config": [] 00:20:27.395 }, 00:20:27.395 { 00:20:27.395 "subsystem": "scheduler", 00:20:27.395 "config": [ 00:20:27.395 { 00:20:27.395 "method": "framework_set_scheduler", 00:20:27.395 "params": { 00:20:27.395 "name": "static" 00:20:27.395 } 00:20:27.395 } 00:20:27.395 ] 00:20:27.395 }, 00:20:27.395 { 00:20:27.395 "subsystem": "nvmf", 00:20:27.395 "config": [ 00:20:27.395 { 00:20:27.395 "method": "nvmf_set_config", 00:20:27.395 "params": { 00:20:27.395 "discovery_filter": "match_any", 00:20:27.395 "admin_cmd_passthru": { 00:20:27.395 "identify_ctrlr": false 00:20:27.395 } 00:20:27.395 } 00:20:27.395 }, 00:20:27.395 { 00:20:27.395 "method": "nvmf_set_max_subsystems", 00:20:27.395 "params": { 00:20:27.395 "max_subsystems": 1024 00:20:27.395 } 00:20:27.395 }, 00:20:27.395 { 00:20:27.395 "method": "nvmf_set_crdt", 00:20:27.396 "params": { 00:20:27.396 "crdt1": 0, 00:20:27.396 "crdt2": 0, 00:20:27.396 "crdt3": 0 00:20:27.396 } 00:20:27.396 }, 00:20:27.396 { 00:20:27.396 "method": "nvmf_create_transport", 00:20:27.396 "params": { 00:20:27.396 "trtype": "TCP", 00:20:27.396 "max_queue_depth": 128, 00:20:27.396 "max_io_qpairs_per_ctrlr": 127, 00:20:27.396 "in_capsule_data_size": 4096, 00:20:27.396 "max_io_size": 131072, 00:20:27.396 "io_unit_size": 131072, 00:20:27.396 "max_aq_depth": 128, 00:20:27.396 "num_shared_buffers": 511, 00:20:27.396 "buf_cache_size": 4294967295, 00:20:27.396 "dif_insert_or_strip": false, 00:20:27.396 "zcopy": false, 00:20:27.396 "c2h_success": false, 00:20:27.396 "sock_priority": 0, 00:20:27.396 "abort_timeout_sec": 1, 00:20:27.396 "ack_timeout": 0, 00:20:27.396 "data_wr_pool_size": 0 00:20:27.396 } 00:20:27.396 }, 00:20:27.396 { 00:20:27.396 "method": 
"nvmf_create_subsystem", 00:20:27.396 "params": { 00:20:27.396 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.396 "allow_any_host": false, 00:20:27.396 "serial_number": "SPDK00000000000001", 00:20:27.396 "model_number": "SPDK bdev Controller", 00:20:27.396 "max_namespaces": 10, 00:20:27.396 "min_cntlid": 1, 00:20:27.396 "max_cntlid": 65519, 00:20:27.396 "ana_reporting": false 00:20:27.396 } 00:20:27.396 }, 00:20:27.396 { 00:20:27.396 "method": "nvmf_subsystem_add_host", 00:20:27.396 "params": { 00:20:27.396 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.396 "host": "nqn.2016-06.io.spdk:host1", 00:20:27.396 "psk": "/tmp/tmp.VeAytykyZb" 00:20:27.396 } 00:20:27.396 }, 00:20:27.396 { 00:20:27.396 "method": "nvmf_subsystem_add_ns", 00:20:27.396 "params": { 00:20:27.396 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.396 "namespace": { 00:20:27.396 "nsid": 1, 00:20:27.396 "bdev_name": "malloc0", 00:20:27.396 "nguid": "6B357C37B1824B01812F4C64B58FF01B", 00:20:27.396 "uuid": "6b357c37-b182-4b01-812f-4c64b58ff01b", 00:20:27.396 "no_auto_visible": false 00:20:27.396 } 00:20:27.396 } 00:20:27.396 }, 00:20:27.396 { 00:20:27.396 "method": "nvmf_subsystem_add_listener", 00:20:27.396 "params": { 00:20:27.396 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.396 "listen_address": { 00:20:27.396 "trtype": "TCP", 00:20:27.396 "adrfam": "IPv4", 00:20:27.396 "traddr": "10.0.0.2", 00:20:27.396 "trsvcid": "4420" 00:20:27.396 }, 00:20:27.396 "secure_channel": true 00:20:27.396 } 00:20:27.396 } 00:20:27.396 ] 00:20:27.396 } 00:20:27.396 ] 00:20:27.396 }' 00:20:27.396 23:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=886675 00:20:27.396 23:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 886675 00:20:27.396 23:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:27.396 23:08:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 886675 ']' 00:20:27.396 23:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.396 23:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:27.396 23:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:27.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:27.396 23:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:27.396 23:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:27.657 [2024-07-24 23:08:45.212789] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:20:27.657 [2024-07-24 23:08:45.212842] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:27.657 EAL: No free 2048 kB hugepages reported on node 1 00:20:27.657 [2024-07-24 23:08:45.301659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.657 [2024-07-24 23:08:45.354921] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:27.657 [2024-07-24 23:08:45.354955] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:27.657 [2024-07-24 23:08:45.354961] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:27.657 [2024-07-24 23:08:45.354966] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:27.657 [2024-07-24 23:08:45.354970] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:27.657 [2024-07-24 23:08:45.355019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:27.918 [2024-07-24 23:08:45.537787] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:27.918 [2024-07-24 23:08:45.570228] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:27.918 [2024-07-24 23:08:45.586233] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:27.918 [2024-07-24 23:08:45.586431] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:28.488 23:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:28.488 23:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:28.488 23:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:28.488 23:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:28.488 23:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:28.488 23:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:28.488 23:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=886858 00:20:28.488 23:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 886858 /var/tmp/bdevperf.sock 00:20:28.488 23:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 886858 ']' 00:20:28.488 23:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:28.488 23:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:28.488 23:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:28.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:28.488 23:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:28.488 23:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:28.488 23:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:28.488 23:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:20:28.488 "subsystems": [ 00:20:28.488 { 00:20:28.488 "subsystem": "keyring", 00:20:28.488 "config": [] 00:20:28.488 }, 00:20:28.488 { 00:20:28.488 "subsystem": "iobuf", 00:20:28.488 "config": [ 00:20:28.488 { 00:20:28.488 "method": "iobuf_set_options", 00:20:28.488 "params": { 00:20:28.488 "small_pool_count": 8192, 00:20:28.488 "large_pool_count": 1024, 00:20:28.488 "small_bufsize": 8192, 00:20:28.488 "large_bufsize": 135168 00:20:28.488 } 00:20:28.488 } 00:20:28.488 ] 00:20:28.488 }, 00:20:28.488 { 00:20:28.488 "subsystem": "sock", 00:20:28.488 "config": [ 00:20:28.488 { 00:20:28.488 "method": "sock_set_default_impl", 00:20:28.488 "params": { 00:20:28.488 "impl_name": "posix" 00:20:28.488 } 00:20:28.488 }, 00:20:28.488 { 00:20:28.488 "method": "sock_impl_set_options", 00:20:28.488 "params": { 00:20:28.488 "impl_name": "ssl", 00:20:28.488 "recv_buf_size": 4096, 00:20:28.488 "send_buf_size": 4096, 00:20:28.488 "enable_recv_pipe": true, 00:20:28.488 "enable_quickack": false, 00:20:28.488 "enable_placement_id": 0, 00:20:28.488 
"enable_zerocopy_send_server": true, 00:20:28.488 "enable_zerocopy_send_client": false, 00:20:28.488 "zerocopy_threshold": 0, 00:20:28.488 "tls_version": 0, 00:20:28.488 "enable_ktls": false 00:20:28.488 } 00:20:28.488 }, 00:20:28.488 { 00:20:28.488 "method": "sock_impl_set_options", 00:20:28.488 "params": { 00:20:28.488 "impl_name": "posix", 00:20:28.488 "recv_buf_size": 2097152, 00:20:28.488 "send_buf_size": 2097152, 00:20:28.488 "enable_recv_pipe": true, 00:20:28.488 "enable_quickack": false, 00:20:28.488 "enable_placement_id": 0, 00:20:28.488 "enable_zerocopy_send_server": true, 00:20:28.488 "enable_zerocopy_send_client": false, 00:20:28.488 "zerocopy_threshold": 0, 00:20:28.488 "tls_version": 0, 00:20:28.488 "enable_ktls": false 00:20:28.488 } 00:20:28.488 } 00:20:28.488 ] 00:20:28.488 }, 00:20:28.488 { 00:20:28.488 "subsystem": "vmd", 00:20:28.488 "config": [] 00:20:28.488 }, 00:20:28.488 { 00:20:28.488 "subsystem": "accel", 00:20:28.488 "config": [ 00:20:28.488 { 00:20:28.488 "method": "accel_set_options", 00:20:28.488 "params": { 00:20:28.488 "small_cache_size": 128, 00:20:28.488 "large_cache_size": 16, 00:20:28.488 "task_count": 2048, 00:20:28.488 "sequence_count": 2048, 00:20:28.488 "buf_count": 2048 00:20:28.488 } 00:20:28.488 } 00:20:28.488 ] 00:20:28.488 }, 00:20:28.488 { 00:20:28.488 "subsystem": "bdev", 00:20:28.488 "config": [ 00:20:28.488 { 00:20:28.488 "method": "bdev_set_options", 00:20:28.488 "params": { 00:20:28.488 "bdev_io_pool_size": 65535, 00:20:28.488 "bdev_io_cache_size": 256, 00:20:28.488 "bdev_auto_examine": true, 00:20:28.488 "iobuf_small_cache_size": 128, 00:20:28.488 "iobuf_large_cache_size": 16 00:20:28.488 } 00:20:28.488 }, 00:20:28.488 { 00:20:28.488 "method": "bdev_raid_set_options", 00:20:28.488 "params": { 00:20:28.488 "process_window_size_kb": 1024, 00:20:28.488 "process_max_bandwidth_mb_sec": 0 00:20:28.488 } 00:20:28.488 }, 00:20:28.488 { 00:20:28.488 "method": "bdev_iscsi_set_options", 00:20:28.488 "params": { 00:20:28.488 
"timeout_sec": 30 00:20:28.488 } 00:20:28.488 }, 00:20:28.488 { 00:20:28.488 "method": "bdev_nvme_set_options", 00:20:28.488 "params": { 00:20:28.488 "action_on_timeout": "none", 00:20:28.488 "timeout_us": 0, 00:20:28.488 "timeout_admin_us": 0, 00:20:28.488 "keep_alive_timeout_ms": 10000, 00:20:28.488 "arbitration_burst": 0, 00:20:28.488 "low_priority_weight": 0, 00:20:28.488 "medium_priority_weight": 0, 00:20:28.488 "high_priority_weight": 0, 00:20:28.488 "nvme_adminq_poll_period_us": 10000, 00:20:28.488 "nvme_ioq_poll_period_us": 0, 00:20:28.488 "io_queue_requests": 512, 00:20:28.488 "delay_cmd_submit": true, 00:20:28.488 "transport_retry_count": 4, 00:20:28.488 "bdev_retry_count": 3, 00:20:28.488 "transport_ack_timeout": 0, 00:20:28.488 "ctrlr_loss_timeout_sec": 0, 00:20:28.488 "reconnect_delay_sec": 0, 00:20:28.488 "fast_io_fail_timeout_sec": 0, 00:20:28.488 "disable_auto_failback": false, 00:20:28.488 "generate_uuids": false, 00:20:28.488 "transport_tos": 0, 00:20:28.488 "nvme_error_stat": false, 00:20:28.488 "rdma_srq_size": 0, 00:20:28.488 "io_path_stat": false, 00:20:28.489 "allow_accel_sequence": false, 00:20:28.489 "rdma_max_cq_size": 0, 00:20:28.489 "rdma_cm_event_timeout_ms": 0, 00:20:28.489 "dhchap_digests": [ 00:20:28.489 "sha256", 00:20:28.489 "sha384", 00:20:28.489 "sha512" 00:20:28.489 ], 00:20:28.489 "dhchap_dhgroups": [ 00:20:28.489 "null", 00:20:28.489 "ffdhe2048", 00:20:28.489 "ffdhe3072", 00:20:28.489 "ffdhe4096", 00:20:28.489 "ffdhe6144", 00:20:28.489 "ffdhe8192" 00:20:28.489 ] 00:20:28.489 } 00:20:28.489 }, 00:20:28.489 { 00:20:28.489 "method": "bdev_nvme_attach_controller", 00:20:28.489 "params": { 00:20:28.489 "name": "TLSTEST", 00:20:28.489 "trtype": "TCP", 00:20:28.489 "adrfam": "IPv4", 00:20:28.489 "traddr": "10.0.0.2", 00:20:28.489 "trsvcid": "4420", 00:20:28.489 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.489 "prchk_reftag": false, 00:20:28.489 "prchk_guard": false, 00:20:28.489 "ctrlr_loss_timeout_sec": 0, 00:20:28.489 
"reconnect_delay_sec": 0, 00:20:28.489 "fast_io_fail_timeout_sec": 0, 00:20:28.489 "psk": "/tmp/tmp.VeAytykyZb", 00:20:28.489 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:28.489 "hdgst": false, 00:20:28.489 "ddgst": false 00:20:28.489 } 00:20:28.489 }, 00:20:28.489 { 00:20:28.489 "method": "bdev_nvme_set_hotplug", 00:20:28.489 "params": { 00:20:28.489 "period_us": 100000, 00:20:28.489 "enable": false 00:20:28.489 } 00:20:28.489 }, 00:20:28.489 { 00:20:28.489 "method": "bdev_wait_for_examine" 00:20:28.489 } 00:20:28.489 ] 00:20:28.489 }, 00:20:28.489 { 00:20:28.489 "subsystem": "nbd", 00:20:28.489 "config": [] 00:20:28.489 } 00:20:28.489 ] 00:20:28.489 }' 00:20:28.489 [2024-07-24 23:08:46.059080] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:20:28.489 [2024-07-24 23:08:46.059133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid886858 ] 00:20:28.489 EAL: No free 2048 kB hugepages reported on node 1 00:20:28.489 [2024-07-24 23:08:46.114826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.489 [2024-07-24 23:08:46.167015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:28.749 [2024-07-24 23:08:46.291009] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:28.749 [2024-07-24 23:08:46.291070] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:29.321 23:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:29.321 23:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:29.321 23:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:29.321 Running I/O for 10 seconds... 00:20:39.324 00:20:39.324 Latency(us) 00:20:39.324 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.324 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:39.324 Verification LBA range: start 0x0 length 0x2000 00:20:39.324 TLSTESTn1 : 10.04 3602.67 14.07 0.00 0.00 35459.92 4450.99 117090.99 00:20:39.324 =================================================================================================================== 00:20:39.324 Total : 3602.67 14.07 0.00 0.00 35459.92 4450.99 117090.99 00:20:39.324 0 00:20:39.324 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:39.324 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 886858 00:20:39.324 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 886858 ']' 00:20:39.324 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 886858 00:20:39.324 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:39.324 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:39.324 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 886858 00:20:39.324 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:39.324 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:39.324 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 886858' 00:20:39.324 killing process with pid 886858 00:20:39.324 23:08:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 886858 00:20:39.324 Received shutdown signal, test time was about 10.000000 seconds 00:20:39.324 00:20:39.324 Latency(us) 00:20:39.324 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.324 =================================================================================================================== 00:20:39.324 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:39.324 [2024-07-24 23:08:57.062227] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:39.324 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 886858 00:20:39.608 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 886675 00:20:39.608 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 886675 ']' 00:20:39.608 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 886675 00:20:39.608 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:39.608 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:39.608 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 886675 00:20:39.608 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:39.608 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:39.608 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 886675' 00:20:39.608 killing process with pid 886675 00:20:39.608 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 886675 00:20:39.608 [2024-07-24 
23:08:57.231663] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:39.608 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 886675 00:20:39.608 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:20:39.608 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:39.608 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:39.608 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.608 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=889045 00:20:39.608 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 889045 00:20:39.608 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:39.608 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 889045 ']' 00:20:39.608 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.608 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:39.608 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:39.608 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:39.608 23:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.875 [2024-07-24 23:08:57.415241] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:20:39.875 [2024-07-24 23:08:57.415298] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.875 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.875 [2024-07-24 23:08:57.490039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.875 [2024-07-24 23:08:57.555047] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.875 [2024-07-24 23:08:57.555086] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.875 [2024-07-24 23:08:57.555093] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:39.875 [2024-07-24 23:08:57.555099] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:39.875 [2024-07-24 23:08:57.555105] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:39.875 [2024-07-24 23:08:57.555123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.446 23:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:40.446 23:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:40.446 23:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:40.446 23:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:40.446 23:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.446 23:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.446 23:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.VeAytykyZb 00:20:40.446 23:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.VeAytykyZb 00:20:40.446 23:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:40.706 [2024-07-24 23:08:58.374011] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.706 23:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:40.967 23:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:40.967 [2024-07-24 23:08:58.706844] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:40.967 [2024-07-24 23:08:58.707055] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:40.967 23:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:41.229 malloc0 00:20:41.229 23:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:41.489 23:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.VeAytykyZb 00:20:41.490 [2024-07-24 23:08:59.202774] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:41.490 23:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=889491 00:20:41.490 23:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:41.490 23:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:41.490 23:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 889491 /var/tmp/bdevperf.sock 00:20:41.490 23:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 889491 ']' 00:20:41.490 23:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:41.490 23:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:41.490 23:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:20:41.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:41.490 23:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:41.490 23:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.750 [2024-07-24 23:08:59.284082] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:20:41.750 [2024-07-24 23:08:59.284136] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid889491 ] 00:20:41.750 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.750 [2024-07-24 23:08:59.366100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.750 [2024-07-24 23:08:59.419824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.321 23:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:42.321 23:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:42.321 23:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VeAytykyZb 00:20:42.582 23:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:42.582 [2024-07-24 23:09:00.325237] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:42.843 nvme0n1 00:20:42.843 23:09:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:42.843 Running I/O for 1 seconds... 00:20:43.785 00:20:43.786 Latency(us) 00:20:43.786 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.786 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:43.786 Verification LBA range: start 0x0 length 0x2000 00:20:43.786 nvme0n1 : 1.07 2740.98 10.71 0.00 0.00 45401.92 5324.80 117964.80 00:20:43.786 =================================================================================================================== 00:20:43.786 Total : 2740.98 10.71 0.00 0.00 45401.92 5324.80 117964.80 00:20:44.047 0 00:20:44.047 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 889491 00:20:44.047 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 889491 ']' 00:20:44.047 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 889491 00:20:44.047 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:44.047 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:44.047 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 889491 00:20:44.047 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:44.047 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:44.047 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 889491' 00:20:44.047 killing process with pid 889491 00:20:44.047 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 889491 
00:20:44.047 Received shutdown signal, test time was about 1.000000 seconds 00:20:44.047 00:20:44.047 Latency(us) 00:20:44.047 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.047 =================================================================================================================== 00:20:44.047 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:44.047 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 889491 00:20:44.047 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 889045 00:20:44.047 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 889045 ']' 00:20:44.047 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 889045 00:20:44.047 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:44.047 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:44.047 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 889045 00:20:44.047 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:44.047 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:44.047 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 889045' 00:20:44.047 killing process with pid 889045 00:20:44.047 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 889045 00:20:44.047 [2024-07-24 23:09:01.811857] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:44.047 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 889045 00:20:44.308 23:09:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:20:44.308 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:44.308 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:44.308 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.308 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=890027 00:20:44.308 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 890027 00:20:44.308 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:44.308 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 890027 ']' 00:20:44.308 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.308 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:44.308 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.308 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:44.308 23:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.308 [2024-07-24 23:09:02.012166] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:20:44.308 [2024-07-24 23:09:02.012223] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.308 EAL: No free 2048 kB hugepages reported on node 1 00:20:44.308 [2024-07-24 23:09:02.083916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.568 [2024-07-24 23:09:02.147984] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:44.568 [2024-07-24 23:09:02.148020] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:44.568 [2024-07-24 23:09:02.148032] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.568 [2024-07-24 23:09:02.148038] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.568 [2024-07-24 23:09:02.148044] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:44.568 [2024-07-24 23:09:02.148060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.138 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:45.138 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:45.138 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:45.138 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:45.138 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.138 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:45.138 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:20:45.138 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.138 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.138 [2024-07-24 23:09:02.814132] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:45.138 malloc0 00:20:45.138 [2024-07-24 23:09:02.840870] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:45.138 [2024-07-24 23:09:02.857082] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:45.138 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.138 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=890380 00:20:45.138 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 890380 /var/tmp/bdevperf.sock 00:20:45.138 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:45.138 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 890380 ']' 00:20:45.138 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:45.138 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:45.138 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:45.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:45.138 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:45.138 23:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.398 [2024-07-24 23:09:02.938818] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:20:45.398 [2024-07-24 23:09:02.938880] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid890380 ] 00:20:45.398 EAL: No free 2048 kB hugepages reported on node 1 00:20:45.398 [2024-07-24 23:09:03.022148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.398 [2024-07-24 23:09:03.075824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:45.969 23:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:45.969 23:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:45.969 23:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VeAytykyZb 00:20:46.230 23:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:46.230 [2024-07-24 23:09:03.973199] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:46.490 nvme0n1 00:20:46.490 23:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:46.491 Running I/O for 1 seconds... 
00:20:47.433 00:20:47.433 Latency(us) 00:20:47.433 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.433 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:47.433 Verification LBA range: start 0x0 length 0x2000 00:20:47.433 nvme0n1 : 1.05 4413.79 17.24 0.00 0.00 28420.38 4669.44 45875.20 00:20:47.433 =================================================================================================================== 00:20:47.433 Total : 4413.79 17.24 0.00 0.00 28420.38 4669.44 45875.20 00:20:47.433 0 00:20:47.433 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:20:47.433 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.433 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.694 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.694 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:20:47.694 "subsystems": [ 00:20:47.694 { 00:20:47.694 "subsystem": "keyring", 00:20:47.694 "config": [ 00:20:47.694 { 00:20:47.694 "method": "keyring_file_add_key", 00:20:47.694 "params": { 00:20:47.694 "name": "key0", 00:20:47.694 "path": "/tmp/tmp.VeAytykyZb" 00:20:47.694 } 00:20:47.695 } 00:20:47.695 ] 00:20:47.695 }, 00:20:47.695 { 00:20:47.695 "subsystem": "iobuf", 00:20:47.695 "config": [ 00:20:47.695 { 00:20:47.695 "method": "iobuf_set_options", 00:20:47.695 "params": { 00:20:47.695 "small_pool_count": 8192, 00:20:47.695 "large_pool_count": 1024, 00:20:47.695 "small_bufsize": 8192, 00:20:47.695 "large_bufsize": 135168 00:20:47.695 } 00:20:47.695 } 00:20:47.695 ] 00:20:47.695 }, 00:20:47.695 { 00:20:47.695 "subsystem": "sock", 00:20:47.695 "config": [ 00:20:47.695 { 00:20:47.695 "method": "sock_set_default_impl", 00:20:47.695 "params": { 00:20:47.695 "impl_name": "posix" 00:20:47.695 } 
00:20:47.695 }, 00:20:47.695 { 00:20:47.695 "method": "sock_impl_set_options", 00:20:47.695 "params": { 00:20:47.695 "impl_name": "ssl", 00:20:47.695 "recv_buf_size": 4096, 00:20:47.695 "send_buf_size": 4096, 00:20:47.695 "enable_recv_pipe": true, 00:20:47.695 "enable_quickack": false, 00:20:47.695 "enable_placement_id": 0, 00:20:47.695 "enable_zerocopy_send_server": true, 00:20:47.695 "enable_zerocopy_send_client": false, 00:20:47.695 "zerocopy_threshold": 0, 00:20:47.695 "tls_version": 0, 00:20:47.695 "enable_ktls": false 00:20:47.695 } 00:20:47.695 }, 00:20:47.695 { 00:20:47.695 "method": "sock_impl_set_options", 00:20:47.695 "params": { 00:20:47.695 "impl_name": "posix", 00:20:47.695 "recv_buf_size": 2097152, 00:20:47.695 "send_buf_size": 2097152, 00:20:47.695 "enable_recv_pipe": true, 00:20:47.695 "enable_quickack": false, 00:20:47.695 "enable_placement_id": 0, 00:20:47.695 "enable_zerocopy_send_server": true, 00:20:47.695 "enable_zerocopy_send_client": false, 00:20:47.695 "zerocopy_threshold": 0, 00:20:47.695 "tls_version": 0, 00:20:47.695 "enable_ktls": false 00:20:47.695 } 00:20:47.695 } 00:20:47.695 ] 00:20:47.695 }, 00:20:47.695 { 00:20:47.695 "subsystem": "vmd", 00:20:47.695 "config": [] 00:20:47.695 }, 00:20:47.695 { 00:20:47.695 "subsystem": "accel", 00:20:47.695 "config": [ 00:20:47.695 { 00:20:47.695 "method": "accel_set_options", 00:20:47.695 "params": { 00:20:47.695 "small_cache_size": 128, 00:20:47.695 "large_cache_size": 16, 00:20:47.695 "task_count": 2048, 00:20:47.695 "sequence_count": 2048, 00:20:47.695 "buf_count": 2048 00:20:47.695 } 00:20:47.695 } 00:20:47.695 ] 00:20:47.695 }, 00:20:47.695 { 00:20:47.695 "subsystem": "bdev", 00:20:47.695 "config": [ 00:20:47.695 { 00:20:47.695 "method": "bdev_set_options", 00:20:47.695 "params": { 00:20:47.695 "bdev_io_pool_size": 65535, 00:20:47.695 "bdev_io_cache_size": 256, 00:20:47.695 "bdev_auto_examine": true, 00:20:47.695 "iobuf_small_cache_size": 128, 00:20:47.695 "iobuf_large_cache_size": 16 
00:20:47.695 } 00:20:47.695 }, 00:20:47.695 { 00:20:47.695 "method": "bdev_raid_set_options", 00:20:47.695 "params": { 00:20:47.695 "process_window_size_kb": 1024, 00:20:47.695 "process_max_bandwidth_mb_sec": 0 00:20:47.695 } 00:20:47.695 }, 00:20:47.695 { 00:20:47.695 "method": "bdev_iscsi_set_options", 00:20:47.695 "params": { 00:20:47.695 "timeout_sec": 30 00:20:47.695 } 00:20:47.695 }, 00:20:47.695 { 00:20:47.695 "method": "bdev_nvme_set_options", 00:20:47.695 "params": { 00:20:47.695 "action_on_timeout": "none", 00:20:47.695 "timeout_us": 0, 00:20:47.695 "timeout_admin_us": 0, 00:20:47.695 "keep_alive_timeout_ms": 10000, 00:20:47.695 "arbitration_burst": 0, 00:20:47.695 "low_priority_weight": 0, 00:20:47.695 "medium_priority_weight": 0, 00:20:47.695 "high_priority_weight": 0, 00:20:47.695 "nvme_adminq_poll_period_us": 10000, 00:20:47.695 "nvme_ioq_poll_period_us": 0, 00:20:47.695 "io_queue_requests": 0, 00:20:47.695 "delay_cmd_submit": true, 00:20:47.695 "transport_retry_count": 4, 00:20:47.695 "bdev_retry_count": 3, 00:20:47.695 "transport_ack_timeout": 0, 00:20:47.695 "ctrlr_loss_timeout_sec": 0, 00:20:47.695 "reconnect_delay_sec": 0, 00:20:47.695 "fast_io_fail_timeout_sec": 0, 00:20:47.695 "disable_auto_failback": false, 00:20:47.695 "generate_uuids": false, 00:20:47.695 "transport_tos": 0, 00:20:47.695 "nvme_error_stat": false, 00:20:47.695 "rdma_srq_size": 0, 00:20:47.695 "io_path_stat": false, 00:20:47.695 "allow_accel_sequence": false, 00:20:47.695 "rdma_max_cq_size": 0, 00:20:47.695 "rdma_cm_event_timeout_ms": 0, 00:20:47.695 "dhchap_digests": [ 00:20:47.695 "sha256", 00:20:47.695 "sha384", 00:20:47.695 "sha512" 00:20:47.695 ], 00:20:47.695 "dhchap_dhgroups": [ 00:20:47.695 "null", 00:20:47.695 "ffdhe2048", 00:20:47.695 "ffdhe3072", 00:20:47.695 "ffdhe4096", 00:20:47.695 "ffdhe6144", 00:20:47.695 "ffdhe8192" 00:20:47.695 ] 00:20:47.695 } 00:20:47.695 }, 00:20:47.695 { 00:20:47.695 "method": "bdev_nvme_set_hotplug", 00:20:47.695 "params": { 00:20:47.695 
"period_us": 100000, 00:20:47.695 "enable": false 00:20:47.695 } 00:20:47.695 }, 00:20:47.695 { 00:20:47.695 "method": "bdev_malloc_create", 00:20:47.695 "params": { 00:20:47.695 "name": "malloc0", 00:20:47.695 "num_blocks": 8192, 00:20:47.695 "block_size": 4096, 00:20:47.695 "physical_block_size": 4096, 00:20:47.695 "uuid": "06e559fd-75e1-478f-ab99-a3d63dade078", 00:20:47.695 "optimal_io_boundary": 0, 00:20:47.695 "md_size": 0, 00:20:47.695 "dif_type": 0, 00:20:47.695 "dif_is_head_of_md": false, 00:20:47.695 "dif_pi_format": 0 00:20:47.695 } 00:20:47.695 }, 00:20:47.695 { 00:20:47.695 "method": "bdev_wait_for_examine" 00:20:47.695 } 00:20:47.695 ] 00:20:47.695 }, 00:20:47.695 { 00:20:47.695 "subsystem": "nbd", 00:20:47.695 "config": [] 00:20:47.695 }, 00:20:47.695 { 00:20:47.695 "subsystem": "scheduler", 00:20:47.695 "config": [ 00:20:47.695 { 00:20:47.695 "method": "framework_set_scheduler", 00:20:47.695 "params": { 00:20:47.695 "name": "static" 00:20:47.695 } 00:20:47.695 } 00:20:47.695 ] 00:20:47.695 }, 00:20:47.695 { 00:20:47.695 "subsystem": "nvmf", 00:20:47.695 "config": [ 00:20:47.695 { 00:20:47.695 "method": "nvmf_set_config", 00:20:47.695 "params": { 00:20:47.695 "discovery_filter": "match_any", 00:20:47.695 "admin_cmd_passthru": { 00:20:47.695 "identify_ctrlr": false 00:20:47.695 } 00:20:47.695 } 00:20:47.695 }, 00:20:47.695 { 00:20:47.695 "method": "nvmf_set_max_subsystems", 00:20:47.695 "params": { 00:20:47.695 "max_subsystems": 1024 00:20:47.695 } 00:20:47.695 }, 00:20:47.695 { 00:20:47.695 "method": "nvmf_set_crdt", 00:20:47.695 "params": { 00:20:47.695 "crdt1": 0, 00:20:47.695 "crdt2": 0, 00:20:47.695 "crdt3": 0 00:20:47.695 } 00:20:47.695 }, 00:20:47.695 { 00:20:47.695 "method": "nvmf_create_transport", 00:20:47.695 "params": { 00:20:47.695 "trtype": "TCP", 00:20:47.695 "max_queue_depth": 128, 00:20:47.695 "max_io_qpairs_per_ctrlr": 127, 00:20:47.695 "in_capsule_data_size": 4096, 00:20:47.695 "max_io_size": 131072, 00:20:47.695 "io_unit_size": 
131072, 00:20:47.695 "max_aq_depth": 128, 00:20:47.695 "num_shared_buffers": 511, 00:20:47.695 "buf_cache_size": 4294967295, 00:20:47.695 "dif_insert_or_strip": false, 00:20:47.695 "zcopy": false, 00:20:47.695 "c2h_success": false, 00:20:47.695 "sock_priority": 0, 00:20:47.695 "abort_timeout_sec": 1, 00:20:47.695 "ack_timeout": 0, 00:20:47.695 "data_wr_pool_size": 0 00:20:47.695 } 00:20:47.695 }, 00:20:47.695 { 00:20:47.695 "method": "nvmf_create_subsystem", 00:20:47.695 "params": { 00:20:47.695 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.695 "allow_any_host": false, 00:20:47.695 "serial_number": "00000000000000000000", 00:20:47.695 "model_number": "SPDK bdev Controller", 00:20:47.695 "max_namespaces": 32, 00:20:47.695 "min_cntlid": 1, 00:20:47.695 "max_cntlid": 65519, 00:20:47.696 "ana_reporting": false 00:20:47.696 } 00:20:47.696 }, 00:20:47.696 { 00:20:47.696 "method": "nvmf_subsystem_add_host", 00:20:47.696 "params": { 00:20:47.696 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.696 "host": "nqn.2016-06.io.spdk:host1", 00:20:47.696 "psk": "key0" 00:20:47.696 } 00:20:47.696 }, 00:20:47.696 { 00:20:47.696 "method": "nvmf_subsystem_add_ns", 00:20:47.696 "params": { 00:20:47.696 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.696 "namespace": { 00:20:47.696 "nsid": 1, 00:20:47.696 "bdev_name": "malloc0", 00:20:47.696 "nguid": "06E559FD75E1478FAB99A3D63DADE078", 00:20:47.696 "uuid": "06e559fd-75e1-478f-ab99-a3d63dade078", 00:20:47.696 "no_auto_visible": false 00:20:47.696 } 00:20:47.696 } 00:20:47.696 }, 00:20:47.696 { 00:20:47.696 "method": "nvmf_subsystem_add_listener", 00:20:47.696 "params": { 00:20:47.696 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.696 "listen_address": { 00:20:47.696 "trtype": "TCP", 00:20:47.696 "adrfam": "IPv4", 00:20:47.696 "traddr": "10.0.0.2", 00:20:47.696 "trsvcid": "4420" 00:20:47.696 }, 00:20:47.696 "secure_channel": false, 00:20:47.696 "sock_impl": "ssl" 00:20:47.696 } 00:20:47.696 } 00:20:47.696 ] 00:20:47.696 } 00:20:47.696 ] 
00:20:47.696 }' 00:20:47.696 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:47.956 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:20:47.956 "subsystems": [ 00:20:47.956 { 00:20:47.956 "subsystem": "keyring", 00:20:47.956 "config": [ 00:20:47.956 { 00:20:47.956 "method": "keyring_file_add_key", 00:20:47.956 "params": { 00:20:47.956 "name": "key0", 00:20:47.956 "path": "/tmp/tmp.VeAytykyZb" 00:20:47.956 } 00:20:47.956 } 00:20:47.956 ] 00:20:47.956 }, 00:20:47.956 { 00:20:47.956 "subsystem": "iobuf", 00:20:47.956 "config": [ 00:20:47.956 { 00:20:47.956 "method": "iobuf_set_options", 00:20:47.956 "params": { 00:20:47.956 "small_pool_count": 8192, 00:20:47.956 "large_pool_count": 1024, 00:20:47.956 "small_bufsize": 8192, 00:20:47.956 "large_bufsize": 135168 00:20:47.956 } 00:20:47.956 } 00:20:47.956 ] 00:20:47.956 }, 00:20:47.956 { 00:20:47.956 "subsystem": "sock", 00:20:47.956 "config": [ 00:20:47.956 { 00:20:47.956 "method": "sock_set_default_impl", 00:20:47.956 "params": { 00:20:47.956 "impl_name": "posix" 00:20:47.956 } 00:20:47.956 }, 00:20:47.956 { 00:20:47.956 "method": "sock_impl_set_options", 00:20:47.956 "params": { 00:20:47.956 "impl_name": "ssl", 00:20:47.956 "recv_buf_size": 4096, 00:20:47.956 "send_buf_size": 4096, 00:20:47.956 "enable_recv_pipe": true, 00:20:47.956 "enable_quickack": false, 00:20:47.956 "enable_placement_id": 0, 00:20:47.956 "enable_zerocopy_send_server": true, 00:20:47.956 "enable_zerocopy_send_client": false, 00:20:47.956 "zerocopy_threshold": 0, 00:20:47.956 "tls_version": 0, 00:20:47.956 "enable_ktls": false 00:20:47.956 } 00:20:47.956 }, 00:20:47.956 { 00:20:47.956 "method": "sock_impl_set_options", 00:20:47.956 "params": { 00:20:47.956 "impl_name": "posix", 00:20:47.956 "recv_buf_size": 2097152, 00:20:47.956 "send_buf_size": 2097152, 00:20:47.956 
"enable_recv_pipe": true, 00:20:47.956 "enable_quickack": false, 00:20:47.956 "enable_placement_id": 0, 00:20:47.956 "enable_zerocopy_send_server": true, 00:20:47.956 "enable_zerocopy_send_client": false, 00:20:47.956 "zerocopy_threshold": 0, 00:20:47.956 "tls_version": 0, 00:20:47.956 "enable_ktls": false 00:20:47.956 } 00:20:47.956 } 00:20:47.956 ] 00:20:47.956 }, 00:20:47.956 { 00:20:47.956 "subsystem": "vmd", 00:20:47.956 "config": [] 00:20:47.956 }, 00:20:47.956 { 00:20:47.956 "subsystem": "accel", 00:20:47.957 "config": [ 00:20:47.957 { 00:20:47.957 "method": "accel_set_options", 00:20:47.957 "params": { 00:20:47.957 "small_cache_size": 128, 00:20:47.957 "large_cache_size": 16, 00:20:47.957 "task_count": 2048, 00:20:47.957 "sequence_count": 2048, 00:20:47.957 "buf_count": 2048 00:20:47.957 } 00:20:47.957 } 00:20:47.957 ] 00:20:47.957 }, 00:20:47.957 { 00:20:47.957 "subsystem": "bdev", 00:20:47.957 "config": [ 00:20:47.957 { 00:20:47.957 "method": "bdev_set_options", 00:20:47.957 "params": { 00:20:47.957 "bdev_io_pool_size": 65535, 00:20:47.957 "bdev_io_cache_size": 256, 00:20:47.957 "bdev_auto_examine": true, 00:20:47.957 "iobuf_small_cache_size": 128, 00:20:47.957 "iobuf_large_cache_size": 16 00:20:47.957 } 00:20:47.957 }, 00:20:47.957 { 00:20:47.957 "method": "bdev_raid_set_options", 00:20:47.957 "params": { 00:20:47.957 "process_window_size_kb": 1024, 00:20:47.957 "process_max_bandwidth_mb_sec": 0 00:20:47.957 } 00:20:47.957 }, 00:20:47.957 { 00:20:47.957 "method": "bdev_iscsi_set_options", 00:20:47.957 "params": { 00:20:47.957 "timeout_sec": 30 00:20:47.957 } 00:20:47.957 }, 00:20:47.957 { 00:20:47.957 "method": "bdev_nvme_set_options", 00:20:47.957 "params": { 00:20:47.957 "action_on_timeout": "none", 00:20:47.957 "timeout_us": 0, 00:20:47.957 "timeout_admin_us": 0, 00:20:47.957 "keep_alive_timeout_ms": 10000, 00:20:47.957 "arbitration_burst": 0, 00:20:47.957 "low_priority_weight": 0, 00:20:47.957 "medium_priority_weight": 0, 00:20:47.957 
"high_priority_weight": 0, 00:20:47.957 "nvme_adminq_poll_period_us": 10000, 00:20:47.957 "nvme_ioq_poll_period_us": 0, 00:20:47.957 "io_queue_requests": 512, 00:20:47.957 "delay_cmd_submit": true, 00:20:47.957 "transport_retry_count": 4, 00:20:47.957 "bdev_retry_count": 3, 00:20:47.957 "transport_ack_timeout": 0, 00:20:47.957 "ctrlr_loss_timeout_sec": 0, 00:20:47.957 "reconnect_delay_sec": 0, 00:20:47.957 "fast_io_fail_timeout_sec": 0, 00:20:47.957 "disable_auto_failback": false, 00:20:47.957 "generate_uuids": false, 00:20:47.957 "transport_tos": 0, 00:20:47.957 "nvme_error_stat": false, 00:20:47.957 "rdma_srq_size": 0, 00:20:47.957 "io_path_stat": false, 00:20:47.957 "allow_accel_sequence": false, 00:20:47.957 "rdma_max_cq_size": 0, 00:20:47.957 "rdma_cm_event_timeout_ms": 0, 00:20:47.957 "dhchap_digests": [ 00:20:47.957 "sha256", 00:20:47.957 "sha384", 00:20:47.957 "sha512" 00:20:47.957 ], 00:20:47.957 "dhchap_dhgroups": [ 00:20:47.957 "null", 00:20:47.957 "ffdhe2048", 00:20:47.957 "ffdhe3072", 00:20:47.957 "ffdhe4096", 00:20:47.957 "ffdhe6144", 00:20:47.957 "ffdhe8192" 00:20:47.957 ] 00:20:47.957 } 00:20:47.957 }, 00:20:47.957 { 00:20:47.957 "method": "bdev_nvme_attach_controller", 00:20:47.957 "params": { 00:20:47.957 "name": "nvme0", 00:20:47.957 "trtype": "TCP", 00:20:47.957 "adrfam": "IPv4", 00:20:47.957 "traddr": "10.0.0.2", 00:20:47.957 "trsvcid": "4420", 00:20:47.957 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.957 "prchk_reftag": false, 00:20:47.957 "prchk_guard": false, 00:20:47.957 "ctrlr_loss_timeout_sec": 0, 00:20:47.957 "reconnect_delay_sec": 0, 00:20:47.957 "fast_io_fail_timeout_sec": 0, 00:20:47.957 "psk": "key0", 00:20:47.957 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:47.957 "hdgst": false, 00:20:47.957 "ddgst": false 00:20:47.957 } 00:20:47.957 }, 00:20:47.957 { 00:20:47.957 "method": "bdev_nvme_set_hotplug", 00:20:47.957 "params": { 00:20:47.957 "period_us": 100000, 00:20:47.957 "enable": false 00:20:47.957 } 00:20:47.957 }, 
00:20:47.957 { 00:20:47.957 "method": "bdev_enable_histogram", 00:20:47.957 "params": { 00:20:47.957 "name": "nvme0n1", 00:20:47.957 "enable": true 00:20:47.957 } 00:20:47.957 }, 00:20:47.957 { 00:20:47.957 "method": "bdev_wait_for_examine" 00:20:47.957 } 00:20:47.957 ] 00:20:47.957 }, 00:20:47.957 { 00:20:47.957 "subsystem": "nbd", 00:20:47.957 "config": [] 00:20:47.957 } 00:20:47.957 ] 00:20:47.957 }' 00:20:47.957 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 890380 00:20:47.957 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 890380 ']' 00:20:47.957 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 890380 00:20:47.957 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:47.957 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:47.957 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 890380 00:20:47.957 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:47.957 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:47.957 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 890380' 00:20:47.957 killing process with pid 890380 00:20:47.957 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 890380 00:20:47.957 Received shutdown signal, test time was about 1.000000 seconds 00:20:47.957 00:20:47.957 Latency(us) 00:20:47.957 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.957 =================================================================================================================== 00:20:47.957 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:20:47.957 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 890380 00:20:47.957 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 890027 00:20:47.957 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 890027 ']' 00:20:47.957 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 890027 00:20:47.957 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:47.957 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:47.957 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 890027 00:20:48.218 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:48.218 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:48.218 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 890027' 00:20:48.218 killing process with pid 890027 00:20:48.218 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 890027 00:20:48.218 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 890027 00:20:48.218 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:20:48.218 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:48.218 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:48.218 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.218 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:20:48.218 "subsystems": [ 00:20:48.218 { 
00:20:48.218 "subsystem": "keyring", 00:20:48.218 "config": [ 00:20:48.218 { 00:20:48.218 "method": "keyring_file_add_key", 00:20:48.218 "params": { 00:20:48.218 "name": "key0", 00:20:48.218 "path": "/tmp/tmp.VeAytykyZb" 00:20:48.218 } 00:20:48.218 } 00:20:48.218 ] 00:20:48.218 }, 00:20:48.218 { 00:20:48.218 "subsystem": "iobuf", 00:20:48.218 "config": [ 00:20:48.218 { 00:20:48.218 "method": "iobuf_set_options", 00:20:48.218 "params": { 00:20:48.218 "small_pool_count": 8192, 00:20:48.218 "large_pool_count": 1024, 00:20:48.218 "small_bufsize": 8192, 00:20:48.218 "large_bufsize": 135168 00:20:48.218 } 00:20:48.218 } 00:20:48.218 ] 00:20:48.218 }, 00:20:48.218 { 00:20:48.218 "subsystem": "sock", 00:20:48.218 "config": [ 00:20:48.218 { 00:20:48.218 "method": "sock_set_default_impl", 00:20:48.218 "params": { 00:20:48.218 "impl_name": "posix" 00:20:48.218 } 00:20:48.218 }, 00:20:48.218 { 00:20:48.218 "method": "sock_impl_set_options", 00:20:48.218 "params": { 00:20:48.218 "impl_name": "ssl", 00:20:48.218 "recv_buf_size": 4096, 00:20:48.218 "send_buf_size": 4096, 00:20:48.218 "enable_recv_pipe": true, 00:20:48.218 "enable_quickack": false, 00:20:48.218 "enable_placement_id": 0, 00:20:48.219 "enable_zerocopy_send_server": true, 00:20:48.219 "enable_zerocopy_send_client": false, 00:20:48.219 "zerocopy_threshold": 0, 00:20:48.219 "tls_version": 0, 00:20:48.219 "enable_ktls": false 00:20:48.219 } 00:20:48.219 }, 00:20:48.219 { 00:20:48.219 "method": "sock_impl_set_options", 00:20:48.219 "params": { 00:20:48.219 "impl_name": "posix", 00:20:48.219 "recv_buf_size": 2097152, 00:20:48.219 "send_buf_size": 2097152, 00:20:48.219 "enable_recv_pipe": true, 00:20:48.219 "enable_quickack": false, 00:20:48.219 "enable_placement_id": 0, 00:20:48.219 "enable_zerocopy_send_server": true, 00:20:48.219 "enable_zerocopy_send_client": false, 00:20:48.219 "zerocopy_threshold": 0, 00:20:48.219 "tls_version": 0, 00:20:48.219 "enable_ktls": false 00:20:48.219 } 00:20:48.219 } 00:20:48.219 ] 
00:20:48.219 }, 00:20:48.219 { 00:20:48.219 "subsystem": "vmd", 00:20:48.219 "config": [] 00:20:48.219 }, 00:20:48.219 { 00:20:48.219 "subsystem": "accel", 00:20:48.219 "config": [ 00:20:48.219 { 00:20:48.219 "method": "accel_set_options", 00:20:48.219 "params": { 00:20:48.219 "small_cache_size": 128, 00:20:48.219 "large_cache_size": 16, 00:20:48.219 "task_count": 2048, 00:20:48.219 "sequence_count": 2048, 00:20:48.219 "buf_count": 2048 00:20:48.219 } 00:20:48.219 } 00:20:48.219 ] 00:20:48.219 }, 00:20:48.219 { 00:20:48.219 "subsystem": "bdev", 00:20:48.219 "config": [ 00:20:48.219 { 00:20:48.219 "method": "bdev_set_options", 00:20:48.219 "params": { 00:20:48.219 "bdev_io_pool_size": 65535, 00:20:48.219 "bdev_io_cache_size": 256, 00:20:48.219 "bdev_auto_examine": true, 00:20:48.219 "iobuf_small_cache_size": 128, 00:20:48.219 "iobuf_large_cache_size": 16 00:20:48.219 } 00:20:48.219 }, 00:20:48.219 { 00:20:48.219 "method": "bdev_raid_set_options", 00:20:48.219 "params": { 00:20:48.219 "process_window_size_kb": 1024, 00:20:48.219 "process_max_bandwidth_mb_sec": 0 00:20:48.219 } 00:20:48.219 }, 00:20:48.219 { 00:20:48.219 "method": "bdev_iscsi_set_options", 00:20:48.219 "params": { 00:20:48.219 "timeout_sec": 30 00:20:48.219 } 00:20:48.219 }, 00:20:48.219 { 00:20:48.219 "method": "bdev_nvme_set_options", 00:20:48.219 "params": { 00:20:48.219 "action_on_timeout": "none", 00:20:48.219 "timeout_us": 0, 00:20:48.219 "timeout_admin_us": 0, 00:20:48.219 "keep_alive_timeout_ms": 10000, 00:20:48.219 "arbitration_burst": 0, 00:20:48.219 "low_priority_weight": 0, 00:20:48.219 "medium_priority_weight": 0, 00:20:48.219 "high_priority_weight": 0, 00:20:48.219 "nvme_adminq_poll_period_us": 10000, 00:20:48.219 "nvme_ioq_poll_period_us": 0, 00:20:48.219 "io_queue_requests": 0, 00:20:48.219 "delay_cmd_submit": true, 00:20:48.219 "transport_retry_count": 4, 00:20:48.219 "bdev_retry_count": 3, 00:20:48.219 "transport_ack_timeout": 0, 00:20:48.219 "ctrlr_loss_timeout_sec": 0, 00:20:48.219 
"reconnect_delay_sec": 0, 00:20:48.219 "fast_io_fail_timeout_sec": 0, 00:20:48.219 "disable_auto_failback": false, 00:20:48.219 "generate_uuids": false, 00:20:48.219 "transport_tos": 0, 00:20:48.219 "nvme_error_stat": false, 00:20:48.219 "rdma_srq_size": 0, 00:20:48.219 "io_path_stat": false, 00:20:48.219 "allow_accel_sequence": false, 00:20:48.219 "rdma_max_cq_size": 0, 00:20:48.219 "rdma_cm_event_timeout_ms": 0, 00:20:48.219 "dhchap_digests": [ 00:20:48.219 "sha256", 00:20:48.219 "sha384", 00:20:48.219 "sha512" 00:20:48.219 ], 00:20:48.219 "dhchap_dhgroups": [ 00:20:48.219 "null", 00:20:48.219 "ffdhe2048", 00:20:48.219 "ffdhe3072", 00:20:48.219 "ffdhe4096", 00:20:48.219 "ffdhe6144", 00:20:48.219 "ffdhe8192" 00:20:48.219 ] 00:20:48.219 } 00:20:48.219 }, 00:20:48.219 { 00:20:48.219 "method": "bdev_nvme_set_hotplug", 00:20:48.219 "params": { 00:20:48.219 "period_us": 100000, 00:20:48.219 "enable": false 00:20:48.219 } 00:20:48.219 }, 00:20:48.219 { 00:20:48.219 "method": "bdev_malloc_create", 00:20:48.219 "params": { 00:20:48.219 "name": "malloc0", 00:20:48.219 "num_blocks": 8192, 00:20:48.219 "block_size": 4096, 00:20:48.219 "physical_block_size": 4096, 00:20:48.219 "uuid": "06e559fd-75e1-478f-ab99-a3d63dade078", 00:20:48.219 "optimal_io_boundary": 0, 00:20:48.219 "md_size": 0, 00:20:48.219 "dif_type": 0, 00:20:48.219 "dif_is_head_of_md": false, 00:20:48.219 "dif_pi_format": 0 00:20:48.219 } 00:20:48.219 }, 00:20:48.219 { 00:20:48.219 "method": "bdev_wait_for_examine" 00:20:48.219 } 00:20:48.219 ] 00:20:48.219 }, 00:20:48.219 { 00:20:48.219 "subsystem": "nbd", 00:20:48.219 "config": [] 00:20:48.219 }, 00:20:48.219 { 00:20:48.219 "subsystem": "scheduler", 00:20:48.219 "config": [ 00:20:48.219 { 00:20:48.219 "method": "framework_set_scheduler", 00:20:48.219 "params": { 00:20:48.219 "name": "static" 00:20:48.219 } 00:20:48.219 } 00:20:48.219 ] 00:20:48.219 }, 00:20:48.219 { 00:20:48.219 "subsystem": "nvmf", 00:20:48.219 "config": [ 00:20:48.219 { 00:20:48.219 
"method": "nvmf_set_config", 00:20:48.219 "params": { 00:20:48.219 "discovery_filter": "match_any", 00:20:48.219 "admin_cmd_passthru": { 00:20:48.219 "identify_ctrlr": false 00:20:48.219 } 00:20:48.219 } 00:20:48.219 }, 00:20:48.219 { 00:20:48.219 "method": "nvmf_set_max_subsystems", 00:20:48.219 "params": { 00:20:48.219 "max_subsystems": 1024 00:20:48.219 } 00:20:48.219 }, 00:20:48.219 { 00:20:48.219 "method": "nvmf_set_crdt", 00:20:48.219 "params": { 00:20:48.219 "crdt1": 0, 00:20:48.219 "crdt2": 0, 00:20:48.219 "crdt3": 0 00:20:48.219 } 00:20:48.219 }, 00:20:48.219 { 00:20:48.219 "method": "nvmf_create_transport", 00:20:48.219 "params": { 00:20:48.219 "trtype": "TCP", 00:20:48.219 "max_queue_depth": 128, 00:20:48.219 "max_io_qpairs_per_ctrlr": 127, 00:20:48.219 "in_capsule_data_size": 4096, 00:20:48.219 "max_io_size": 131072, 00:20:48.219 "io_unit_size": 131072, 00:20:48.219 "max_aq_depth": 128, 00:20:48.219 "num_shared_buffers": 511, 00:20:48.219 "buf_cache_size": 4294967295, 00:20:48.219 "dif_insert_or_strip": false, 00:20:48.219 "zcopy": false, 00:20:48.219 "c2h_success": false, 00:20:48.219 "sock_priority": 0, 00:20:48.219 "abort_timeout_sec": 1, 00:20:48.219 "ack_timeout": 0, 00:20:48.219 "data_wr_pool_size": 0 00:20:48.219 } 00:20:48.219 }, 00:20:48.219 { 00:20:48.219 "method": "nvmf_create_subsystem", 00:20:48.219 "params": { 00:20:48.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.219 "allow_any_host": false, 00:20:48.219 "serial_number": "00000000000000000000", 00:20:48.219 "model_number": "SPDK bdev Controller", 00:20:48.219 "max_namespaces": 32, 00:20:48.219 "min_cntlid": 1, 00:20:48.219 "max_cntlid": 65519, 00:20:48.219 "ana_reporting": false 00:20:48.219 } 00:20:48.219 }, 00:20:48.219 { 00:20:48.219 "method": "nvmf_subsystem_add_host", 00:20:48.219 "params": { 00:20:48.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.219 "host": "nqn.2016-06.io.spdk:host1", 00:20:48.219 "psk": "key0" 00:20:48.219 } 00:20:48.219 }, 00:20:48.219 { 00:20:48.219 
"method": "nvmf_subsystem_add_ns", 00:20:48.219 "params": { 00:20:48.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.219 "namespace": { 00:20:48.219 "nsid": 1, 00:20:48.219 "bdev_name": "malloc0", 00:20:48.219 "nguid": "06E559FD75E1478FAB99A3D63DADE078", 00:20:48.220 "uuid": "06e559fd-75e1-478f-ab99-a3d63dade078", 00:20:48.220 "no_auto_visible": false 00:20:48.220 } 00:20:48.220 } 00:20:48.220 }, 00:20:48.220 { 00:20:48.220 "method": "nvmf_subsystem_add_listener", 00:20:48.220 "params": { 00:20:48.220 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.220 "listen_address": { 00:20:48.220 "trtype": "TCP", 00:20:48.220 "adrfam": "IPv4", 00:20:48.220 "traddr": "10.0.0.2", 00:20:48.220 "trsvcid": "4420" 00:20:48.220 }, 00:20:48.220 "secure_channel": false, 00:20:48.220 "sock_impl": "ssl" 00:20:48.220 } 00:20:48.220 } 00:20:48.220 ] 00:20:48.220 } 00:20:48.220 ] 00:20:48.220 }' 00:20:48.220 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=890888 00:20:48.220 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 890888 00:20:48.220 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:48.220 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 890888 ']' 00:20:48.220 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.220 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:48.220 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:48.220 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:48.220 23:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.220 [2024-07-24 23:09:05.971552] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:20:48.220 [2024-07-24 23:09:05.971610] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:48.481 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.481 [2024-07-24 23:09:06.043921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.481 [2024-07-24 23:09:06.108429] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:48.481 [2024-07-24 23:09:06.108466] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:48.481 [2024-07-24 23:09:06.108474] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:48.481 [2024-07-24 23:09:06.108480] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:48.481 [2024-07-24 23:09:06.108485] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:48.481 [2024-07-24 23:09:06.108541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.741 [2024-07-24 23:09:06.305973] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:48.741 [2024-07-24 23:09:06.348045] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:48.741 [2024-07-24 23:09:06.348268] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:49.002 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:49.002 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:49.002 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:49.002 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:49.002 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.002 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:49.002 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=891094 00:20:49.002 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 891094 /var/tmp/bdevperf.sock 00:20:49.002 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 891094 ']' 00:20:49.002 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:49.002 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:49.002 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:49.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:49.002 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:49.002 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:49.002 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.002 23:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:20:49.002 "subsystems": [ 00:20:49.002 { 00:20:49.002 "subsystem": "keyring", 00:20:49.002 "config": [ 00:20:49.002 { 00:20:49.002 "method": "keyring_file_add_key", 00:20:49.002 "params": { 00:20:49.002 "name": "key0", 00:20:49.002 "path": "/tmp/tmp.VeAytykyZb" 00:20:49.002 } 00:20:49.002 } 00:20:49.002 ] 00:20:49.002 }, 00:20:49.002 { 00:20:49.002 "subsystem": "iobuf", 00:20:49.002 "config": [ 00:20:49.002 { 00:20:49.002 "method": "iobuf_set_options", 00:20:49.002 "params": { 00:20:49.002 "small_pool_count": 8192, 00:20:49.002 "large_pool_count": 1024, 00:20:49.002 "small_bufsize": 8192, 00:20:49.002 "large_bufsize": 135168 00:20:49.002 } 00:20:49.002 } 00:20:49.002 ] 00:20:49.002 }, 00:20:49.002 { 00:20:49.002 "subsystem": "sock", 00:20:49.002 "config": [ 00:20:49.002 { 00:20:49.002 "method": "sock_set_default_impl", 00:20:49.002 "params": { 00:20:49.002 "impl_name": "posix" 00:20:49.002 } 00:20:49.002 }, 00:20:49.002 { 00:20:49.002 "method": "sock_impl_set_options", 00:20:49.002 "params": { 00:20:49.002 "impl_name": "ssl", 00:20:49.002 "recv_buf_size": 4096, 00:20:49.002 "send_buf_size": 4096, 00:20:49.002 "enable_recv_pipe": true, 00:20:49.002 "enable_quickack": false, 00:20:49.002 "enable_placement_id": 0, 00:20:49.002 "enable_zerocopy_send_server": true, 00:20:49.002 "enable_zerocopy_send_client": false, 00:20:49.002 
"zerocopy_threshold": 0, 00:20:49.002 "tls_version": 0, 00:20:49.002 "enable_ktls": false 00:20:49.002 } 00:20:49.002 }, 00:20:49.002 { 00:20:49.002 "method": "sock_impl_set_options", 00:20:49.002 "params": { 00:20:49.002 "impl_name": "posix", 00:20:49.002 "recv_buf_size": 2097152, 00:20:49.002 "send_buf_size": 2097152, 00:20:49.002 "enable_recv_pipe": true, 00:20:49.002 "enable_quickack": false, 00:20:49.002 "enable_placement_id": 0, 00:20:49.002 "enable_zerocopy_send_server": true, 00:20:49.002 "enable_zerocopy_send_client": false, 00:20:49.002 "zerocopy_threshold": 0, 00:20:49.002 "tls_version": 0, 00:20:49.002 "enable_ktls": false 00:20:49.002 } 00:20:49.002 } 00:20:49.002 ] 00:20:49.002 }, 00:20:49.002 { 00:20:49.002 "subsystem": "vmd", 00:20:49.002 "config": [] 00:20:49.002 }, 00:20:49.002 { 00:20:49.002 "subsystem": "accel", 00:20:49.002 "config": [ 00:20:49.002 { 00:20:49.002 "method": "accel_set_options", 00:20:49.002 "params": { 00:20:49.002 "small_cache_size": 128, 00:20:49.002 "large_cache_size": 16, 00:20:49.002 "task_count": 2048, 00:20:49.002 "sequence_count": 2048, 00:20:49.002 "buf_count": 2048 00:20:49.002 } 00:20:49.002 } 00:20:49.002 ] 00:20:49.002 }, 00:20:49.002 { 00:20:49.002 "subsystem": "bdev", 00:20:49.002 "config": [ 00:20:49.002 { 00:20:49.002 "method": "bdev_set_options", 00:20:49.002 "params": { 00:20:49.002 "bdev_io_pool_size": 65535, 00:20:49.002 "bdev_io_cache_size": 256, 00:20:49.002 "bdev_auto_examine": true, 00:20:49.002 "iobuf_small_cache_size": 128, 00:20:49.002 "iobuf_large_cache_size": 16 00:20:49.002 } 00:20:49.002 }, 00:20:49.002 { 00:20:49.002 "method": "bdev_raid_set_options", 00:20:49.002 "params": { 00:20:49.002 "process_window_size_kb": 1024, 00:20:49.002 "process_max_bandwidth_mb_sec": 0 00:20:49.002 } 00:20:49.002 }, 00:20:49.002 { 00:20:49.002 "method": "bdev_iscsi_set_options", 00:20:49.002 "params": { 00:20:49.002 "timeout_sec": 30 00:20:49.002 } 00:20:49.002 }, 00:20:49.002 { 00:20:49.002 "method": 
"bdev_nvme_set_options", 00:20:49.002 "params": { 00:20:49.002 "action_on_timeout": "none", 00:20:49.002 "timeout_us": 0, 00:20:49.002 "timeout_admin_us": 0, 00:20:49.002 "keep_alive_timeout_ms": 10000, 00:20:49.002 "arbitration_burst": 0, 00:20:49.002 "low_priority_weight": 0, 00:20:49.002 "medium_priority_weight": 0, 00:20:49.002 "high_priority_weight": 0, 00:20:49.002 "nvme_adminq_poll_period_us": 10000, 00:20:49.002 "nvme_ioq_poll_period_us": 0, 00:20:49.002 "io_queue_requests": 512, 00:20:49.002 "delay_cmd_submit": true, 00:20:49.002 "transport_retry_count": 4, 00:20:49.002 "bdev_retry_count": 3, 00:20:49.002 "transport_ack_timeout": 0, 00:20:49.002 "ctrlr_loss_timeout_sec": 0, 00:20:49.002 "reconnect_delay_sec": 0, 00:20:49.002 "fast_io_fail_timeout_sec": 0, 00:20:49.002 "disable_auto_failback": false, 00:20:49.002 "generate_uuids": false, 00:20:49.002 "transport_tos": 0, 00:20:49.002 "nvme_error_stat": false, 00:20:49.002 "rdma_srq_size": 0, 00:20:49.002 "io_path_stat": false, 00:20:49.002 "allow_accel_sequence": false, 00:20:49.002 "rdma_max_cq_size": 0, 00:20:49.002 "rdma_cm_event_timeout_ms": 0, 00:20:49.002 "dhchap_digests": [ 00:20:49.002 "sha256", 00:20:49.002 "sha384", 00:20:49.002 "sha512" 00:20:49.002 ], 00:20:49.002 "dhchap_dhgroups": [ 00:20:49.002 "null", 00:20:49.002 "ffdhe2048", 00:20:49.002 "ffdhe3072", 00:20:49.002 "ffdhe4096", 00:20:49.002 "ffdhe6144", 00:20:49.002 "ffdhe8192" 00:20:49.002 ] 00:20:49.002 } 00:20:49.002 }, 00:20:49.002 { 00:20:49.002 "method": "bdev_nvme_attach_controller", 00:20:49.002 "params": { 00:20:49.002 "name": "nvme0", 00:20:49.003 "trtype": "TCP", 00:20:49.003 "adrfam": "IPv4", 00:20:49.003 "traddr": "10.0.0.2", 00:20:49.003 "trsvcid": "4420", 00:20:49.003 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.003 "prchk_reftag": false, 00:20:49.003 "prchk_guard": false, 00:20:49.003 "ctrlr_loss_timeout_sec": 0, 00:20:49.003 "reconnect_delay_sec": 0, 00:20:49.003 "fast_io_fail_timeout_sec": 0, 00:20:49.003 "psk": "key0", 
00:20:49.003 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:49.003 "hdgst": false, 00:20:49.003 "ddgst": false 00:20:49.003 } 00:20:49.003 }, 00:20:49.003 { 00:20:49.003 "method": "bdev_nvme_set_hotplug", 00:20:49.003 "params": { 00:20:49.003 "period_us": 100000, 00:20:49.003 "enable": false 00:20:49.003 } 00:20:49.003 }, 00:20:49.003 { 00:20:49.003 "method": "bdev_enable_histogram", 00:20:49.003 "params": { 00:20:49.003 "name": "nvme0n1", 00:20:49.003 "enable": true 00:20:49.003 } 00:20:49.003 }, 00:20:49.003 { 00:20:49.003 "method": "bdev_wait_for_examine" 00:20:49.003 } 00:20:49.003 ] 00:20:49.003 }, 00:20:49.003 { 00:20:49.003 "subsystem": "nbd", 00:20:49.003 "config": [] 00:20:49.003 } 00:20:49.003 ] 00:20:49.003 }' 00:20:49.263 [2024-07-24 23:09:06.815817] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:20:49.263 [2024-07-24 23:09:06.815867] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid891094 ] 00:20:49.263 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.263 [2024-07-24 23:09:06.895421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.264 [2024-07-24 23:09:06.949101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.524 [2024-07-24 23:09:07.082009] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:50.096 23:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:50.096 23:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:50.096 23:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:50.096 23:09:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:20:50.096 23:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.096 23:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:50.096 Running I/O for 1 seconds... 00:20:51.478 00:20:51.478 Latency(us) 00:20:51.478 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.478 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:51.478 Verification LBA range: start 0x0 length 0x2000 00:20:51.478 nvme0n1 : 1.07 2547.39 9.95 0.00 0.00 48848.36 5707.09 63351.47 00:20:51.478 =================================================================================================================== 00:20:51.478 Total : 2547.39 9.95 0.00 0.00 48848.36 5707.09 63351.47 00:20:51.478 0 00:20:51.478 23:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:20:51.478 23:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:20:51.478 23:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:51.478 23:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:20:51.478 23:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:20:51.478 23:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:20:51.478 23:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:51.478 23:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:51.478 23:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z 
nvmf_trace.0 ]] 00:20:51.478 23:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:51.478 23:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:51.478 nvmf_trace.0 00:20:51.478 23:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:20:51.478 23:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 891094 00:20:51.478 23:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 891094 ']' 00:20:51.478 23:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 891094 00:20:51.479 23:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:51.479 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:51.479 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 891094 00:20:51.479 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:51.479 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:51.479 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 891094' 00:20:51.479 killing process with pid 891094 00:20:51.479 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 891094 00:20:51.479 Received shutdown signal, test time was about 1.000000 seconds 00:20:51.479 00:20:51.479 Latency(us) 00:20:51.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.479 
=================================================================================================================== 00:20:51.479 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:51.479 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 891094 00:20:51.479 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:51.479 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:51.479 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:20:51.479 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:51.479 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:20:51.479 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:51.479 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:51.479 rmmod nvme_tcp 00:20:51.479 rmmod nvme_fabrics 00:20:51.479 rmmod nvme_keyring 00:20:51.479 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:51.479 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:20:51.479 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:20:51.479 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 890888 ']' 00:20:51.479 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 890888 00:20:51.479 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 890888 ']' 00:20:51.479 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 890888 00:20:51.479 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:51.479 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:20:51.479 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 890888 00:20:51.739 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:51.739 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:51.739 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 890888' 00:20:51.739 killing process with pid 890888 00:20:51.739 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 890888 00:20:51.739 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 890888 00:20:51.739 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:51.739 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:51.739 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:51.739 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:51.739 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:51.739 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.739 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:51.739 23:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.mCpMqaNlEw /tmp/tmp.2siUzUhHLu /tmp/tmp.VeAytykyZb 00:20:54.283 00:20:54.283 real 1m25.299s 
00:20:54.283 user 2m9.060s 00:20:54.283 sys 0m28.924s 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.283 ************************************ 00:20:54.283 END TEST nvmf_tls 00:20:54.283 ************************************ 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:54.283 ************************************ 00:20:54.283 START TEST nvmf_fips 00:20:54.283 ************************************ 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:54.283 * Looking for test storage... 
00:20:54.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # 
openssl version 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:54.283 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:54.284 Error setting digest 00:20:54.284 00E2D366D17F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:54.284 00E2D366D17F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:54.284 23:09:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:20:54.284 23:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # x722=() 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:02.424 23:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:02.424 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:02.424 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.424 23:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:02.424 Found net devices under 0000:31:00.0: cvl_0_0 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.424 
23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:02.424 Found net devices under 0000:31:00.1: cvl_0_1 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:02.424 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:02.425 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:02.425 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:02.425 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:02.425 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:02.425 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:02.425 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:02.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:02.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:21:02.425 00:21:02.425 --- 10.0.0.2 ping statistics --- 00:21:02.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.425 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:21:02.425 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:02.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:02.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.371 ms 00:21:02.425 00:21:02.425 --- 10.0.0.1 ping statistics --- 00:21:02.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.425 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:21:02.425 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:02.425 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:02.425 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:02.425 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:02.425 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:02.425 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:02.425 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:02.425 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:02.425 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:02.425 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:02.425 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:02.425 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:21:02.425 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:02.425 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=896642 00:21:02.425 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 896642 00:21:02.425 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:02.425 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 896642 ']' 00:21:02.425 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.425 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:02.425 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.425 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:02.425 23:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:02.425 [2024-07-24 23:09:19.926638] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:21:02.425 [2024-07-24 23:09:19.926706] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:02.425 EAL: No free 2048 kB hugepages reported on node 1
00:21:02.425 [2024-07-24 23:09:20.022956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:02.425 [2024-07-24 23:09:20.126556] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:02.425 [2024-07-24 23:09:20.126616] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:02.425 [2024-07-24 23:09:20.126625] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:02.425 [2024-07-24 23:09:20.126632] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:02.425 [2024-07-24 23:09:20.126638] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:02.425 [2024-07-24 23:09:20.126664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:21:03.064 23:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:21:03.064 23:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0
00:21:03.064 23:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:21:03.064 23:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable
00:21:03.064 23:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:21:03.064 23:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:03.064 23:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT
00:21:03.064 23:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:
00:21:03.064 23:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
00:21:03.064 23:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:
00:21:03.064 23:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
00:21:03.064 23:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
00:21:03.064 23:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
00:21:03.064 23:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:21:03.326 [2024-07-24 23:09:20.891760] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:03.326 [2024-07-24 23:09:20.907761] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:21:03.326 [2024-07-24 23:09:20.908068] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:03.326 [2024-07-24 23:09:20.937891] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09
00:21:03.326 malloc0
00:21:03.326 23:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:21:03.326 23:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=896949
00:21:03.326 23:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 896949 /var/tmp/bdevperf.sock
00:21:03.326 23:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:21:03.326 23:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 896949 ']'
00:21:03.326 23:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:03.326 23:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100
00:21:03.326 23:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:03.326 23:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable
00:21:03.326 23:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:21:03.326 [2024-07-24 23:09:21.041673] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization...
00:21:03.326 [2024-07-24 23:09:21.041743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid896949 ]
00:21:03.326 EAL: No free 2048 kB hugepages reported on node 1
00:21:03.326 [2024-07-24 23:09:21.103544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:03.586 [2024-07-24 23:09:21.167402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:21:04.157 23:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:21:04.157 23:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0
00:21:04.157 23:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
00:21:04.157 [2024-07-24 23:09:21.918658] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:21:04.157 [2024-07-24 23:09:21.918718] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:21:04.417 TLSTESTn1
00:21:04.417 23:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:04.417 Running I/O for 10 seconds...
00:21:14.413
00:21:14.413 Latency(us)
00:21:14.413 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:14.413 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:14.413 Verification LBA range: start 0x0 length 0x2000
00:21:14.413 TLSTESTn1 : 10.02 3812.34 14.89 0.00 0.00 33525.44 4805.97 122333.87
00:21:14.413 ===================================================================================================================
00:21:14.413 Total : 3812.34 14.89 0.00 0.00 33525.44 4805.97 122333.87
00:21:14.413 0
00:21:14.413 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:21:14.413 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:21:14.413 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id
00:21:14.413 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0
00:21:14.413 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']'
00:21:14.413 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:21:14.413 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0
00:21:14.413 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]]
00:21:14.413 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files
00:21:14.413 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
nvmf_trace.0
00:21:14.673 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0
00:21:14.673 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 896949
00:21:14.673 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 896949 ']'
00:21:14.673 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 896949
00:21:14.673 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname
00:21:14.673 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:14.673 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 896949
00:21:14.673 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:21:14.673 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:21:14.673 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 896949'
killing process with pid 896949
00:21:14.673 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 896949
00:21:14.673 Received shutdown signal, test time was about 10.000000 seconds
00:21:14.673
00:21:14.673 Latency(us)
00:21:14.673 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:14.673 ===================================================================================================================
00:21:14.673 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:14.673 [2024-07-24 23:09:32.334195] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:21:14.673 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 896949
00:21:14.673 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini
00:21:14.673 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:14.673 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync
00:21:14.673 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:21:14.673 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e
00:21:14.673 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:14.673 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:21:14.934 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:21:14.934 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e
00:21:14.934 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0
00:21:14.934 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 896642 ']'
00:21:14.934 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 896642
00:21:14.934 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 896642 ']'
00:21:14.934 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 896642
00:21:14.934 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname
00:21:14.934 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:14.934 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 896642
00:21:14.934 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:21:14.934 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:21:14.934 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 896642'
killing process with pid 896642
00:21:14.934 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 896642
00:21:14.934 [2024-07-24 23:09:32.576960] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:21:14.934 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 896642
00:21:14.934 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:21:14.934 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:21:14.934 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:21:14.934 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:21:14.934 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns
00:21:14.934 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:14.934 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:14.934 23:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:17.479 23:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:21:17.479 23:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
00:21:17.479
00:21:17.479 real 0m23.205s
00:21:17.479 user 0m23.459s
00:21:17.479 sys 0m10.344s
00:21:17.479 23:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable
00:21:17.479 23:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:21:17.479 ************************************
00:21:17.479 END TEST nvmf_fips
00:21:17.479 ************************************
00:21:17.479 23:09:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']'
00:21:17.479 23:09:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]]
00:21:17.479 23:09:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']'
00:21:17.479 23:09:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs
00:21:17.479 23:09:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable
00:21:17.479 23:09:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=()
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=()
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=()
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=()
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=()
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=()
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=()
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
Found 0000:31:00.0 (0x8086 - 0x159b)
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
Found 0000:31:00.1 (0x8086 - 0x159b)
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:21:25.626 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]]
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
Found net devices under 0000:31:00.0: cvl_0_0
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]]
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
Found net devices under 0000:31:00.1: cvl_0_1
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 ))
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:21:25.627 ************************************
00:21:25.627 START TEST nvmf_perf_adq
00:21:25.627 ************************************
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp
00:21:25.627 * Looking for test storage...
00:21:25.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable
00:21:25.627 23:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=()
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=()
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=()
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=()
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=()
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=()
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=()
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
Found 0000:31:00.0 (0x8086 - 0x159b)
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
Found 0000:31:00.1 (0x8086 - 0x159b)
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:21:33.772 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:21:33.773 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:21:33.773 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:33.773 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:33.773 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.773 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:33.773 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:33.773 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.773 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:33.773 Found net devices under 0000:31:00.0: cvl_0_0 00:21:33.773 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.773 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:33.773 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.773 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:33.773 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.773 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:33.773 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:33.773 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.773 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:33.773 Found net devices under 0000:31:00.1: cvl_0_1 00:21:33.773 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.773 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:33.773 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:33.773 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:33.773 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:33.773 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:21:33.773 23:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:34.714 23:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:36.630 23:09:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:41.948 
23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 
00:21:41.948 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:41.949 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:41.949 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:41.949 23:09:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:41.949 Found net devices under 0000:31:00.0: cvl_0_0 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.949 23:09:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:41.949 Found net devices under 0000:31:00.1: cvl_0_1 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:41.949 
23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:41.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:41.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:21:41.949 00:21:41.949 --- 10.0.0.2 ping statistics --- 00:21:41.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.949 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:41.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:41.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:21:41.949 00:21:41.949 --- 10.0.0.1 ping statistics --- 00:21:41.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.949 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter 
start_nvmf_tgt 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=909858 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 909858 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 909858 ']' 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:41.949 23:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.949 [2024-07-24 23:09:59.557759] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:21:41.949 [2024-07-24 23:09:59.557824] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.949 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.949 [2024-07-24 23:09:59.638483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:41.949 [2024-07-24 23:09:59.713935] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.949 [2024-07-24 23:09:59.713972] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.949 [2024-07-24 23:09:59.713980] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.949 [2024-07-24 23:09:59.713987] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:41.949 [2024-07-24 23:09:59.713993] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:41.949 [2024-07-24 23:09:59.714130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.949 [2024-07-24 23:09:59.714249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.949 [2024-07-24 23:09:59.714405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.949 [2024-07-24 23:09:59.714406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:42.893 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:42.893 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:42.893 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:42.893 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:42.893 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.893 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:42.893 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:21:42.893 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:42.893 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:42.893 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.893 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.893 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.893 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:42.893 23:10:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:42.893 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.893 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.893 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.894 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:42.894 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.894 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.894 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.894 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:42.894 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.894 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.894 [2024-07-24 23:10:00.512075] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:42.894 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.894 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:42.894 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.894 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.894 Malloc1 00:21:42.894 23:10:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.894 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:42.894 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.894 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.894 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.894 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:42.894 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.894 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.894 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.894 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:42.894 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.894 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.894 [2024-07-24 23:10:00.571415] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.894 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.894 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=910144 00:21:42.894 23:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:21:42.894 23:10:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:42.894 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.806 23:10:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:21:44.806 23:10:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.806 23:10:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:45.067 23:10:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.067 23:10:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:21:45.067 "tick_rate": 2400000000, 00:21:45.067 "poll_groups": [ 00:21:45.067 { 00:21:45.067 "name": "nvmf_tgt_poll_group_000", 00:21:45.067 "admin_qpairs": 1, 00:21:45.067 "io_qpairs": 1, 00:21:45.067 "current_admin_qpairs": 1, 00:21:45.067 "current_io_qpairs": 1, 00:21:45.067 "pending_bdev_io": 0, 00:21:45.067 "completed_nvme_io": 20777, 00:21:45.067 "transports": [ 00:21:45.067 { 00:21:45.067 "trtype": "TCP" 00:21:45.067 } 00:21:45.067 ] 00:21:45.067 }, 00:21:45.067 { 00:21:45.067 "name": "nvmf_tgt_poll_group_001", 00:21:45.067 "admin_qpairs": 0, 00:21:45.067 "io_qpairs": 1, 00:21:45.067 "current_admin_qpairs": 0, 00:21:45.067 "current_io_qpairs": 1, 00:21:45.067 "pending_bdev_io": 0, 00:21:45.067 "completed_nvme_io": 28964, 00:21:45.067 "transports": [ 00:21:45.067 { 00:21:45.067 "trtype": "TCP" 00:21:45.067 } 00:21:45.067 ] 00:21:45.067 }, 00:21:45.067 { 00:21:45.067 "name": "nvmf_tgt_poll_group_002", 00:21:45.067 "admin_qpairs": 0, 00:21:45.067 "io_qpairs": 1, 00:21:45.067 "current_admin_qpairs": 0, 00:21:45.067 "current_io_qpairs": 1, 00:21:45.067 "pending_bdev_io": 0, 
00:21:45.067 "completed_nvme_io": 19499, 00:21:45.067 "transports": [ 00:21:45.067 { 00:21:45.067 "trtype": "TCP" 00:21:45.067 } 00:21:45.067 ] 00:21:45.067 }, 00:21:45.067 { 00:21:45.067 "name": "nvmf_tgt_poll_group_003", 00:21:45.067 "admin_qpairs": 0, 00:21:45.067 "io_qpairs": 1, 00:21:45.067 "current_admin_qpairs": 0, 00:21:45.067 "current_io_qpairs": 1, 00:21:45.067 "pending_bdev_io": 0, 00:21:45.067 "completed_nvme_io": 20053, 00:21:45.067 "transports": [ 00:21:45.067 { 00:21:45.067 "trtype": "TCP" 00:21:45.067 } 00:21:45.067 ] 00:21:45.067 } 00:21:45.067 ] 00:21:45.067 }' 00:21:45.067 23:10:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:45.067 23:10:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:21:45.067 23:10:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:21:45.067 23:10:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:21:45.067 23:10:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 910144 00:21:53.201 Initializing NVMe Controllers 00:21:53.201 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:53.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:53.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:53.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:53.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:53.201 Initialization complete. Launching workers. 
00:21:53.201 ======================================================== 00:21:53.201 Latency(us) 00:21:53.201 Device Information : IOPS MiB/s Average min max 00:21:53.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11238.10 43.90 5695.09 1365.35 9432.95 00:21:53.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14665.00 57.29 4363.60 1144.82 10455.59 00:21:53.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13671.80 53.41 4684.50 1106.08 42217.25 00:21:53.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 14831.60 57.94 4315.04 1219.89 9712.89 00:21:53.201 ======================================================== 00:21:53.201 Total : 54406.49 212.53 4706.03 1106.08 42217.25 00:21:53.201 00:21:53.201 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:21:53.201 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:53.202 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:53.202 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:53.202 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:53.202 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:53.202 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:53.202 rmmod nvme_tcp 00:21:53.202 rmmod nvme_fabrics 00:21:53.202 rmmod nvme_keyring 00:21:53.202 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:53.202 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:53.202 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:53.202 23:10:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 909858 ']' 00:21:53.202 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 909858 00:21:53.202 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 909858 ']' 00:21:53.202 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 909858 00:21:53.202 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:21:53.202 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:53.202 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 909858 00:21:53.202 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:53.202 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:53.202 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 909858' 00:21:53.202 killing process with pid 909858 00:21:53.202 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 909858 00:21:53.202 23:10:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 909858 00:21:53.462 23:10:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:53.462 23:10:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:53.462 23:10:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:53.463 23:10:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:53.463 23:10:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 
-- # remove_spdk_ns 00:21:53.463 23:10:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.463 23:10:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:53.463 23:10:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.377 23:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:55.377 23:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:21:55.377 23:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:57.292 23:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:59.204 23:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.494 
23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@298 -- # local -ga mlx 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:04.494 23:10:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:04.494 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:04.494 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:04.495 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:04.495 Found net devices under 0000:31:00.0: cvl_0_0 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:04.495 Found net devices under 0000:31:00.1: cvl_0_1 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:04.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:04.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.695 ms 00:22:04.495 00:22:04.495 --- 10.0.0.2 ping statistics --- 00:22:04.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.495 rtt min/avg/max/mdev = 0.695/0.695/0.695/0.000 ms 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:04.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:04.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.371 ms 00:22:04.495 00:22:04.495 --- 10.0.0.1 ping statistics --- 00:22:04.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.495 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk 
ethtool --offload cvl_0_0 hw-tc-offload on 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:04.495 net.core.busy_poll = 1 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:04.495 net.core.busy_read = 1 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:04.495 23:10:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:04.495 23:10:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:04.495 23:10:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:04.495 23:10:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:04.495 23:10:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:04.495 23:10:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:04.495 23:10:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:04.495 23:10:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 
00:22:04.495 23:10:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=914682 00:22:04.495 23:10:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 914682 00:22:04.495 23:10:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:04.495 23:10:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 914682 ']' 00:22:04.495 23:10:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.495 23:10:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:04.495 23:10:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.495 23:10:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:04.495 23:10:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:04.495 [2024-07-24 23:10:22.245387] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:22:04.495 [2024-07-24 23:10:22.245463] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:04.756 EAL: No free 2048 kB hugepages reported on node 1 00:22:04.756 [2024-07-24 23:10:22.322068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:04.756 [2024-07-24 23:10:22.394304] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:04.756 [2024-07-24 23:10:22.394344] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:04.756 [2024-07-24 23:10:22.394351] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:04.756 [2024-07-24 23:10:22.394358] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:04.756 [2024-07-24 23:10:22.394363] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:04.756 [2024-07-24 23:10:22.394503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:04.756 [2024-07-24 23:10:22.394624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:04.756 [2024-07-24 23:10:22.394801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.756 [2024-07-24 23:10:22.394801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:05.328 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:05.328 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:22:05.328 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:05.328 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:05.328 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.328 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:05.328 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:22:05.328 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:05.328 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:05.328 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.328 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.328 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.328 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:05.328 23:10:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:05.328 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.328 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.328 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.328 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:05.328 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.328 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.588 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.588 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:05.588 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.588 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.588 [2024-07-24 23:10:23.186929] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.588 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.588 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:05.588 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.588 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.588 Malloc1 00:22:05.588 23:10:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.588 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:05.588 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.588 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.588 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.588 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:05.588 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.588 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.588 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.588 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:05.588 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.588 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.588 [2024-07-24 23:10:23.246243] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:05.588 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.588 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=914946 00:22:05.588 23:10:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:22:05.589 23:10:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:05.589 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.500 23:10:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:22:07.500 23:10:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.500 23:10:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:07.500 23:10:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.500 23:10:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:22:07.500 "tick_rate": 2400000000, 00:22:07.500 "poll_groups": [ 00:22:07.500 { 00:22:07.500 "name": "nvmf_tgt_poll_group_000", 00:22:07.500 "admin_qpairs": 1, 00:22:07.500 "io_qpairs": 3, 00:22:07.500 "current_admin_qpairs": 1, 00:22:07.500 "current_io_qpairs": 3, 00:22:07.500 "pending_bdev_io": 0, 00:22:07.500 "completed_nvme_io": 30904, 00:22:07.500 "transports": [ 00:22:07.500 { 00:22:07.500 "trtype": "TCP" 00:22:07.500 } 00:22:07.500 ] 00:22:07.500 }, 00:22:07.500 { 00:22:07.500 "name": "nvmf_tgt_poll_group_001", 00:22:07.500 "admin_qpairs": 0, 00:22:07.500 "io_qpairs": 1, 00:22:07.500 "current_admin_qpairs": 0, 00:22:07.500 "current_io_qpairs": 1, 00:22:07.500 "pending_bdev_io": 0, 00:22:07.500 "completed_nvme_io": 36444, 00:22:07.500 "transports": [ 00:22:07.500 { 00:22:07.500 "trtype": "TCP" 00:22:07.500 } 00:22:07.500 ] 00:22:07.500 }, 00:22:07.500 { 00:22:07.500 "name": "nvmf_tgt_poll_group_002", 00:22:07.500 "admin_qpairs": 0, 00:22:07.500 "io_qpairs": 0, 00:22:07.500 "current_admin_qpairs": 0, 00:22:07.500 "current_io_qpairs": 0, 00:22:07.500 "pending_bdev_io": 0, 
00:22:07.500 "completed_nvme_io": 0, 00:22:07.500 "transports": [ 00:22:07.500 { 00:22:07.500 "trtype": "TCP" 00:22:07.500 } 00:22:07.500 ] 00:22:07.500 }, 00:22:07.500 { 00:22:07.500 "name": "nvmf_tgt_poll_group_003", 00:22:07.500 "admin_qpairs": 0, 00:22:07.500 "io_qpairs": 0, 00:22:07.500 "current_admin_qpairs": 0, 00:22:07.500 "current_io_qpairs": 0, 00:22:07.500 "pending_bdev_io": 0, 00:22:07.500 "completed_nvme_io": 0, 00:22:07.500 "transports": [ 00:22:07.500 { 00:22:07.500 "trtype": "TCP" 00:22:07.500 } 00:22:07.500 ] 00:22:07.500 } 00:22:07.500 ] 00:22:07.500 }' 00:22:07.760 23:10:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:07.760 23:10:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:22:07.760 23:10:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:22:07.760 23:10:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:22:07.760 23:10:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 914946 00:22:15.937 Initializing NVMe Controllers 00:22:15.937 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:15.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:15.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:15.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:15.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:15.937 Initialization complete. Launching workers. 
00:22:15.937 ======================================================== 00:22:15.937 Latency(us) 00:22:15.937 Device Information : IOPS MiB/s Average min max 00:22:15.937 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6035.70 23.58 10604.84 1616.29 59888.42 00:22:15.937 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 18762.40 73.29 3411.56 1253.70 7217.39 00:22:15.937 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5885.60 22.99 10874.71 1841.91 55145.80 00:22:15.937 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8924.90 34.86 7170.26 1075.19 58829.90 00:22:15.937 ======================================================== 00:22:15.937 Total : 39608.60 154.72 6463.62 1075.19 59888.42 00:22:15.937 00:22:15.937 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:22:15.937 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:15.937 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:15.937 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:15.937 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:15.937 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:15.937 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:15.937 rmmod nvme_tcp 00:22:15.937 rmmod nvme_fabrics 00:22:15.937 rmmod nvme_keyring 00:22:15.937 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:15.937 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:15.937 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:15.937 23:10:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 914682 ']' 00:22:15.937 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 914682 00:22:15.937 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 914682 ']' 00:22:15.937 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 914682 00:22:15.937 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:22:15.937 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:15.937 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 914682 00:22:15.937 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:15.937 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:15.937 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 914682' 00:22:15.937 killing process with pid 914682 00:22:15.937 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 914682 00:22:15.937 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 914682 00:22:15.937 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:15.937 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:15.937 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:15.937 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:15.937 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 
-- # remove_spdk_ns 00:22:15.937 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.937 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:15.937 23:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.239 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:19.239 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:22:19.239 00:22:19.239 real 0m54.029s 00:22:19.239 user 2m50.079s 00:22:19.239 sys 0m11.170s 00:22:19.239 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:19.239 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.239 ************************************ 00:22:19.240 END TEST nvmf_perf_adq 00:22:19.240 ************************************ 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:19.240 ************************************ 00:22:19.240 START TEST nvmf_shutdown 00:22:19.240 ************************************ 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:19.240 * Looking for test storage... 
00:22:19.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:19.240 23:10:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:19.240 23:10:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:19.240 23:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:19.502 ************************************ 00:22:19.502 START TEST nvmf_shutdown_tc1 00:22:19.502 ************************************ 00:22:19.502 23:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:22:19.502 23:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:22:19.502 23:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:19.502 23:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:19.502 23:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:19.502 23:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:19.502 23:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:19.502 23:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:19.502 23:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.502 23:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:19.502 23:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.502 23:10:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:19.502 23:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:19.502 23:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:19.502 23:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:27.642 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:27.642 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:27.642 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:27.642 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:27.642 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:27.642 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 
00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:27.643 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:27.643 23:10:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:27.643 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:27.643 23:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 
-- # (( 1 == 0 )) 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:27.643 Found net devices under 0000:31:00.0: cvl_0_0 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:27.643 Found net devices under 0000:31:00.1: cvl_0_1 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:27.643 23:10:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:27.643 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:27.643 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:22:27.643 00:22:27.643 --- 10.0.0.2 ping statistics --- 00:22:27.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.643 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:22:27.643 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:27.643 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:27.643 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:22:27.643 00:22:27.643 --- 10.0.0.1 ping statistics --- 00:22:27.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.643 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:22:27.644 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:27.644 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:22:27.644 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:27.644 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:27.644 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:27.644 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:27.644 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:27.644 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:27.644 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:27.644 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:27.644 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:27.644 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:27.644 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:27.644 
23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=921862 00:22:27.644 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 921862 00:22:27.644 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:27.644 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 921862 ']' 00:22:27.644 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.644 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:27.644 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:27.644 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:27.644 23:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:27.644 [2024-07-24 23:10:45.414219] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
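The transport-option selection traced above (nvmf/common.sh@454-465) amounts to a small string-building branch. This is a reconstruction from the trace, not the helper's full body; the RDMA branch is stubbed out because this run never takes it:

```shell
# Sketch of the NVMF_TRANSPORT_OPTS selection seen in the trace.
# Variable names come from the log; surrounding logic is an assumption.
TEST_TRANSPORT=tcp
NVMF_TRANSPORT_OPTS="-t $TEST_TRANSPORT"

if [[ $TEST_TRANSPORT == rdma ]]; then
    # RDMA-specific probing would happen here (not exercised in this log)
    :
elif [[ $TEST_TRANSPORT == tcp ]]; then
    # the tcp branch appends -o, matching common.sh@465 in the trace;
    # the combined string is later passed to nvmf_create_transport
    NVMF_TRANSPORT_OPTS="$NVMF_TRANSPORT_OPTS -o"
fi

echo "$NVMF_TRANSPORT_OPTS"   # -t tcp -o
```

This matches the `NVMF_TRANSPORT_OPTS='-t tcp -o'` assignment and the `'[' tcp == tcp ']'` test visible in the trace.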
00:22:27.644 [2024-07-24 23:10:45.414272] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.905 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.905 [2024-07-24 23:10:45.511047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:27.905 [2024-07-24 23:10:45.605945] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:27.905 [2024-07-24 23:10:45.605999] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:27.905 [2024-07-24 23:10:45.606007] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:27.905 [2024-07-24 23:10:45.606014] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:27.905 [2024-07-24 23:10:45.606020] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:27.905 [2024-07-24 23:10:45.606152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.905 [2024-07-24 23:10:45.606188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:27.905 [2024-07-24 23:10:45.606331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:27.905 [2024-07-24 23:10:45.606332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.476 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:28.476 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:28.476 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:28.476 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:28.476 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:28.476 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.476 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:28.476 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.476 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:28.476 [2024-07-24 23:10:46.245609] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.476 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.476 23:10:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:28.476 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:28.476 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:28.476 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:28.476 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:28.737 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:28.737 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:28.737 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:28.737 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:28.737 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:28.737 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:28.737 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:28.737 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:28.737 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:28.737 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 
00:22:28.737 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:28.737 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:28.737 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:28.737 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:28.737 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:28.737 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:28.737 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:28.737 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:28.737 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:28.737 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:28.737 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:28.737 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.737 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:28.737 Malloc1 00:22:28.737 [2024-07-24 23:10:46.349129] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.737 Malloc2 00:22:28.737 Malloc3 00:22:28.737 Malloc4 00:22:28.737 Malloc5 00:22:28.737 Malloc6 00:22:28.999 Malloc7 00:22:28.999 Malloc8 00:22:28.999 Malloc9 
00:22:28.999 Malloc10 00:22:28.999 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.999 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:28.999 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:28.999 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:28.999 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=922251 00:22:28.999 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 922251 /var/tmp/bdevperf.sock 00:22:28.999 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 922251 ']' 00:22:28.999 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:28.999 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:28.999 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:28.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
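The repeated `for i in "${num_subsystems[@]}"` / `cat` pairs above (target/shutdown.sh@27-28) append one RPC snippet per subsystem into rpcs.txt, which is why ten Malloc bdevs (Malloc1..Malloc10) appear afterwards. A minimal sketch of that loop, with illustrative RPC bodies and a temp file standing in for the real rpcs.txt path:

```shell
# Sketch of the create_subsystems loop traced above. The three RPCs per
# subsystem are an assumption about what each `cat` emits; the real
# heredoc contents are not shown in the trace.
num_subsystems=({1..10})
rpcs=$(mktemp)

for i in "${num_subsystems[@]}"; do
  cat >>"$rpcs" <<EOF
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done

wc -l <"$rpcs"   # 30: three RPC lines for each of the ten subsystems
```

The accumulated file is then replayed against the target in one `rpc_cmd` batch (target/shutdown.sh@35 in the trace).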
00:22:28.999 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:28.999 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:28.999 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:28.999 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:28.999 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:28.999 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:28.999 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:28.999 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:28.999 { 00:22:28.999 "params": { 00:22:28.999 "name": "Nvme$subsystem", 00:22:28.999 "trtype": "$TEST_TRANSPORT", 00:22:28.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.999 "adrfam": "ipv4", 00:22:28.999 "trsvcid": "$NVMF_PORT", 00:22:28.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.999 "hdgst": ${hdgst:-false}, 00:22:28.999 "ddgst": ${ddgst:-false} 00:22:28.999 }, 00:22:28.999 "method": "bdev_nvme_attach_controller" 00:22:28.999 } 00:22:28.999 EOF 00:22:28.999 )") 00:22:28.999 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:28.999 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:28.999 23:10:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:28.999 { 00:22:28.999 "params": { 00:22:28.999 "name": "Nvme$subsystem", 00:22:28.999 "trtype": "$TEST_TRANSPORT", 00:22:28.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.999 "adrfam": "ipv4", 00:22:28.999 "trsvcid": "$NVMF_PORT", 00:22:28.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.999 "hdgst": ${hdgst:-false}, 00:22:28.999 "ddgst": ${ddgst:-false} 00:22:28.999 }, 00:22:28.999 "method": "bdev_nvme_attach_controller" 00:22:28.999 } 00:22:28.999 EOF 00:22:28.999 )") 00:22:28.999 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:28.999 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:28.999 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:28.999 { 00:22:28.999 "params": { 00:22:28.999 "name": "Nvme$subsystem", 00:22:28.999 "trtype": "$TEST_TRANSPORT", 00:22:28.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.999 "adrfam": "ipv4", 00:22:28.999 "trsvcid": "$NVMF_PORT", 00:22:28.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.999 "hdgst": ${hdgst:-false}, 00:22:28.999 "ddgst": ${ddgst:-false} 00:22:28.999 }, 00:22:28.999 "method": "bdev_nvme_attach_controller" 00:22:28.999 } 00:22:28.999 EOF 00:22:28.999 )") 00:22:28.999 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:28.999 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:28.999 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:28.999 { 
00:22:28.999 "params": { 00:22:29.000 "name": "Nvme$subsystem", 00:22:29.000 "trtype": "$TEST_TRANSPORT", 00:22:29.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.000 "adrfam": "ipv4", 00:22:29.000 "trsvcid": "$NVMF_PORT", 00:22:29.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.000 "hdgst": ${hdgst:-false}, 00:22:29.000 "ddgst": ${ddgst:-false} 00:22:29.000 }, 00:22:29.000 "method": "bdev_nvme_attach_controller" 00:22:29.000 } 00:22:29.000 EOF 00:22:29.000 )") 00:22:29.000 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:29.261 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:29.261 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:29.261 { 00:22:29.261 "params": { 00:22:29.261 "name": "Nvme$subsystem", 00:22:29.261 "trtype": "$TEST_TRANSPORT", 00:22:29.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.261 "adrfam": "ipv4", 00:22:29.261 "trsvcid": "$NVMF_PORT", 00:22:29.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.261 "hdgst": ${hdgst:-false}, 00:22:29.261 "ddgst": ${ddgst:-false} 00:22:29.261 }, 00:22:29.261 "method": "bdev_nvme_attach_controller" 00:22:29.261 } 00:22:29.261 EOF 00:22:29.261 )") 00:22:29.261 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:29.261 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:29.261 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:29.261 { 00:22:29.261 "params": { 00:22:29.261 "name": "Nvme$subsystem", 00:22:29.261 "trtype": "$TEST_TRANSPORT", 00:22:29.261 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:22:29.261 "adrfam": "ipv4", 00:22:29.261 "trsvcid": "$NVMF_PORT", 00:22:29.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.261 "hdgst": ${hdgst:-false}, 00:22:29.261 "ddgst": ${ddgst:-false} 00:22:29.261 }, 00:22:29.261 "method": "bdev_nvme_attach_controller" 00:22:29.261 } 00:22:29.261 EOF 00:22:29.261 )") 00:22:29.261 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:29.261 [2024-07-24 23:10:46.801546] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:22:29.261 [2024-07-24 23:10:46.801599] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:29.261 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:29.261 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:29.261 { 00:22:29.261 "params": { 00:22:29.261 "name": "Nvme$subsystem", 00:22:29.261 "trtype": "$TEST_TRANSPORT", 00:22:29.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.261 "adrfam": "ipv4", 00:22:29.261 "trsvcid": "$NVMF_PORT", 00:22:29.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.261 "hdgst": ${hdgst:-false}, 00:22:29.261 "ddgst": ${ddgst:-false} 00:22:29.261 }, 00:22:29.261 "method": "bdev_nvme_attach_controller" 00:22:29.261 } 00:22:29.261 EOF 00:22:29.261 )") 00:22:29.261 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:29.261 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:22:29.261 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:29.261 { 00:22:29.261 "params": { 00:22:29.261 "name": "Nvme$subsystem", 00:22:29.261 "trtype": "$TEST_TRANSPORT", 00:22:29.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.261 "adrfam": "ipv4", 00:22:29.261 "trsvcid": "$NVMF_PORT", 00:22:29.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.261 "hdgst": ${hdgst:-false}, 00:22:29.261 "ddgst": ${ddgst:-false} 00:22:29.261 }, 00:22:29.261 "method": "bdev_nvme_attach_controller" 00:22:29.261 } 00:22:29.261 EOF 00:22:29.261 )") 00:22:29.261 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:29.261 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:29.261 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:29.261 { 00:22:29.261 "params": { 00:22:29.261 "name": "Nvme$subsystem", 00:22:29.261 "trtype": "$TEST_TRANSPORT", 00:22:29.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.261 "adrfam": "ipv4", 00:22:29.261 "trsvcid": "$NVMF_PORT", 00:22:29.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.261 "hdgst": ${hdgst:-false}, 00:22:29.261 "ddgst": ${ddgst:-false} 00:22:29.261 }, 00:22:29.261 "method": "bdev_nvme_attach_controller" 00:22:29.261 } 00:22:29.261 EOF 00:22:29.261 )") 00:22:29.261 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:29.261 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:29.261 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:22:29.261 { 00:22:29.261 "params": { 00:22:29.262 "name": "Nvme$subsystem", 00:22:29.262 "trtype": "$TEST_TRANSPORT", 00:22:29.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.262 "adrfam": "ipv4", 00:22:29.262 "trsvcid": "$NVMF_PORT", 00:22:29.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.262 "hdgst": ${hdgst:-false}, 00:22:29.262 "ddgst": ${ddgst:-false} 00:22:29.262 }, 00:22:29.262 "method": "bdev_nvme_attach_controller" 00:22:29.262 } 00:22:29.262 EOF 00:22:29.262 )") 00:22:29.262 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:29.262 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.262 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:22:29.262 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:29.262 23:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:29.262 "params": { 00:22:29.262 "name": "Nvme1", 00:22:29.262 "trtype": "tcp", 00:22:29.262 "traddr": "10.0.0.2", 00:22:29.262 "adrfam": "ipv4", 00:22:29.262 "trsvcid": "4420", 00:22:29.262 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.262 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:29.262 "hdgst": false, 00:22:29.262 "ddgst": false 00:22:29.262 }, 00:22:29.262 "method": "bdev_nvme_attach_controller" 00:22:29.262 },{ 00:22:29.262 "params": { 00:22:29.262 "name": "Nvme2", 00:22:29.262 "trtype": "tcp", 00:22:29.262 "traddr": "10.0.0.2", 00:22:29.262 "adrfam": "ipv4", 00:22:29.262 "trsvcid": "4420", 00:22:29.262 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:29.262 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:29.262 "hdgst": false, 00:22:29.262 "ddgst": false 00:22:29.262 }, 00:22:29.262 "method": "bdev_nvme_attach_controller" 00:22:29.262 },{ 00:22:29.262 "params": { 00:22:29.262 
"name": "Nvme3", 00:22:29.262 "trtype": "tcp", 00:22:29.262 "traddr": "10.0.0.2", 00:22:29.262 "adrfam": "ipv4", 00:22:29.262 "trsvcid": "4420", 00:22:29.262 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:29.262 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:29.262 "hdgst": false, 00:22:29.262 "ddgst": false 00:22:29.262 }, 00:22:29.262 "method": "bdev_nvme_attach_controller" 00:22:29.262 },{ 00:22:29.262 "params": { 00:22:29.262 "name": "Nvme4", 00:22:29.262 "trtype": "tcp", 00:22:29.262 "traddr": "10.0.0.2", 00:22:29.262 "adrfam": "ipv4", 00:22:29.262 "trsvcid": "4420", 00:22:29.262 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:29.262 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:29.262 "hdgst": false, 00:22:29.262 "ddgst": false 00:22:29.262 }, 00:22:29.262 "method": "bdev_nvme_attach_controller" 00:22:29.262 },{ 00:22:29.262 "params": { 00:22:29.262 "name": "Nvme5", 00:22:29.262 "trtype": "tcp", 00:22:29.262 "traddr": "10.0.0.2", 00:22:29.262 "adrfam": "ipv4", 00:22:29.262 "trsvcid": "4420", 00:22:29.262 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:29.262 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:29.262 "hdgst": false, 00:22:29.262 "ddgst": false 00:22:29.262 }, 00:22:29.262 "method": "bdev_nvme_attach_controller" 00:22:29.262 },{ 00:22:29.262 "params": { 00:22:29.262 "name": "Nvme6", 00:22:29.262 "trtype": "tcp", 00:22:29.262 "traddr": "10.0.0.2", 00:22:29.262 "adrfam": "ipv4", 00:22:29.262 "trsvcid": "4420", 00:22:29.262 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:29.262 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:29.262 "hdgst": false, 00:22:29.262 "ddgst": false 00:22:29.262 }, 00:22:29.262 "method": "bdev_nvme_attach_controller" 00:22:29.262 },{ 00:22:29.262 "params": { 00:22:29.262 "name": "Nvme7", 00:22:29.262 "trtype": "tcp", 00:22:29.262 "traddr": "10.0.0.2", 00:22:29.262 "adrfam": "ipv4", 00:22:29.262 "trsvcid": "4420", 00:22:29.262 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:29.262 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:29.262 
"hdgst": false, 00:22:29.262 "ddgst": false 00:22:29.262 }, 00:22:29.262 "method": "bdev_nvme_attach_controller" 00:22:29.262 },{ 00:22:29.262 "params": { 00:22:29.262 "name": "Nvme8", 00:22:29.262 "trtype": "tcp", 00:22:29.262 "traddr": "10.0.0.2", 00:22:29.262 "adrfam": "ipv4", 00:22:29.262 "trsvcid": "4420", 00:22:29.262 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:29.262 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:29.262 "hdgst": false, 00:22:29.262 "ddgst": false 00:22:29.262 }, 00:22:29.262 "method": "bdev_nvme_attach_controller" 00:22:29.262 },{ 00:22:29.262 "params": { 00:22:29.262 "name": "Nvme9", 00:22:29.262 "trtype": "tcp", 00:22:29.262 "traddr": "10.0.0.2", 00:22:29.262 "adrfam": "ipv4", 00:22:29.262 "trsvcid": "4420", 00:22:29.262 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:29.262 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:29.262 "hdgst": false, 00:22:29.262 "ddgst": false 00:22:29.262 }, 00:22:29.262 "method": "bdev_nvme_attach_controller" 00:22:29.262 },{ 00:22:29.262 "params": { 00:22:29.262 "name": "Nvme10", 00:22:29.262 "trtype": "tcp", 00:22:29.262 "traddr": "10.0.0.2", 00:22:29.262 "adrfam": "ipv4", 00:22:29.262 "trsvcid": "4420", 00:22:29.262 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:29.262 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:29.262 "hdgst": false, 00:22:29.262 "ddgst": false 00:22:29.262 }, 00:22:29.262 "method": "bdev_nvme_attach_controller" 00:22:29.262 }' 00:22:29.262 [2024-07-24 23:10:46.868434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.262 [2024-07-24 23:10:46.932841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.646 23:10:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:30.647 23:10:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:30.647 23:10:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:30.647 23:10:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.647 23:10:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:30.647 23:10:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.647 23:10:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 922251 00:22:30.647 23:10:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:22:30.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 922251 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:30.647 23:10:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:22:31.589 23:10:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 921862 00:22:31.589 23:10:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:31.589 23:10:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:31.589 23:10:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:31.589 23:10:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:31.589 23:10:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
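The `waitforlisten` calls traced above (for pids 921862 and 922251) gate each step on the app's RPC socket coming up, with `max_retries=100` as seen in the trace. A sketch of that polling shape; the internals are assumptions reconstructed from the trace, and the real helper also issues `rpc_cmd framework_wait_init` once the socket exists (visible at target/shutdown.sh@80):

```shell
# Assumed shape of waitforlisten (common/autotest_common.sh): poll until
# the process is alive and its UNIX-domain RPC socket appears, bounded
# by max_retries. Not the verbatim helper.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
        if ! kill -0 "$pid" 2>/dev/null; then
            return 1    # process exited before it could listen
        fi
        if [[ -S $rpc_addr ]]; then
            return 0    # RPC socket is up; caller may now issue RPCs
        fi
        sleep 0.1
    done
    return 1
}
```

This also explains the `kill -9 922251` sequence above: once bdev_svc has validated the generated JSON, the test kills it and reuses `/var/tmp/bdevperf.sock` for the real bdevperf run.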
00:22:31.589 23:10:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:22:31.589 {
00:22:31.589 "params": {
00:22:31.589 "name": "Nvme$subsystem",
00:22:31.589 "trtype": "$TEST_TRANSPORT",
00:22:31.589 "traddr": "$NVMF_FIRST_TARGET_IP",
00:22:31.589 "adrfam": "ipv4",
00:22:31.589 "trsvcid": "$NVMF_PORT",
00:22:31.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:22:31.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:22:31.589 "hdgst": ${hdgst:-false},
00:22:31.589 "ddgst": ${ddgst:-false}
00:22:31.589 },
00:22:31.589 "method": "bdev_nvme_attach_controller"
00:22:31.589 }
00:22:31.589 EOF
00:22:31.589 )")
00:22:31.589 23:10:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat
00:22:31.589 23:10:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:22:31.590 [2024-07-24 23:10:49.215712] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization...
00:22:31.590 [2024-07-24 23:10:49.215775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid922654 ]
00:22:31.590 23:10:49
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:31.590 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.590 23:10:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:22:31.590 23:10:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:31.590 23:10:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:31.590 "params": { 00:22:31.590 "name": "Nvme1", 00:22:31.590 "trtype": "tcp", 00:22:31.590 "traddr": "10.0.0.2", 00:22:31.590 "adrfam": "ipv4", 00:22:31.590 "trsvcid": "4420", 00:22:31.590 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.590 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:31.590 "hdgst": false, 00:22:31.590 "ddgst": false 00:22:31.590 }, 00:22:31.590 "method": "bdev_nvme_attach_controller" 00:22:31.590 },{ 00:22:31.590 "params": { 00:22:31.590 "name": "Nvme2", 00:22:31.590 "trtype": "tcp", 00:22:31.590 "traddr": "10.0.0.2", 00:22:31.590 "adrfam": "ipv4", 00:22:31.590 "trsvcid": "4420", 00:22:31.590 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:31.590 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:31.590 "hdgst": false, 00:22:31.590 "ddgst": false 00:22:31.590 }, 00:22:31.590 "method": "bdev_nvme_attach_controller" 00:22:31.590 },{ 00:22:31.590 "params": { 00:22:31.590 "name": "Nvme3", 00:22:31.590 "trtype": "tcp", 00:22:31.590 "traddr": "10.0.0.2", 00:22:31.590 "adrfam": "ipv4", 00:22:31.590 "trsvcid": "4420", 00:22:31.590 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:31.590 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:31.590 "hdgst": false, 00:22:31.590 "ddgst": false 00:22:31.590 }, 00:22:31.590 "method": "bdev_nvme_attach_controller" 00:22:31.590 },{ 00:22:31.590 "params": { 00:22:31.590 "name": "Nvme4", 00:22:31.590 "trtype": "tcp", 00:22:31.590 "traddr": "10.0.0.2", 00:22:31.590 "adrfam": "ipv4", 00:22:31.590 "trsvcid": "4420", 00:22:31.590 
"subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:31.590 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:31.590 "hdgst": false, 00:22:31.590 "ddgst": false 00:22:31.590 }, 00:22:31.590 "method": "bdev_nvme_attach_controller" 00:22:31.590 },{ 00:22:31.590 "params": { 00:22:31.590 "name": "Nvme5", 00:22:31.590 "trtype": "tcp", 00:22:31.590 "traddr": "10.0.0.2", 00:22:31.590 "adrfam": "ipv4", 00:22:31.590 "trsvcid": "4420", 00:22:31.590 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:31.590 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:31.590 "hdgst": false, 00:22:31.590 "ddgst": false 00:22:31.590 }, 00:22:31.590 "method": "bdev_nvme_attach_controller" 00:22:31.590 },{ 00:22:31.590 "params": { 00:22:31.590 "name": "Nvme6", 00:22:31.590 "trtype": "tcp", 00:22:31.591 "traddr": "10.0.0.2", 00:22:31.591 "adrfam": "ipv4", 00:22:31.591 "trsvcid": "4420", 00:22:31.591 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:31.591 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:31.591 "hdgst": false, 00:22:31.591 "ddgst": false 00:22:31.591 }, 00:22:31.591 "method": "bdev_nvme_attach_controller" 00:22:31.591 },{ 00:22:31.591 "params": { 00:22:31.591 "name": "Nvme7", 00:22:31.591 "trtype": "tcp", 00:22:31.591 "traddr": "10.0.0.2", 00:22:31.591 "adrfam": "ipv4", 00:22:31.591 "trsvcid": "4420", 00:22:31.591 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:31.591 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:31.591 "hdgst": false, 00:22:31.591 "ddgst": false 00:22:31.591 }, 00:22:31.591 "method": "bdev_nvme_attach_controller" 00:22:31.591 },{ 00:22:31.591 "params": { 00:22:31.591 "name": "Nvme8", 00:22:31.591 "trtype": "tcp", 00:22:31.591 "traddr": "10.0.0.2", 00:22:31.591 "adrfam": "ipv4", 00:22:31.591 "trsvcid": "4420", 00:22:31.591 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:31.591 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:31.591 "hdgst": false, 00:22:31.591 "ddgst": false 00:22:31.591 }, 00:22:31.591 "method": "bdev_nvme_attach_controller" 00:22:31.591 },{ 00:22:31.591 "params": { 
00:22:31.591 "name": "Nvme9", 00:22:31.591 "trtype": "tcp", 00:22:31.591 "traddr": "10.0.0.2", 00:22:31.591 "adrfam": "ipv4", 00:22:31.591 "trsvcid": "4420", 00:22:31.591 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:31.591 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:31.591 "hdgst": false, 00:22:31.591 "ddgst": false 00:22:31.591 }, 00:22:31.591 "method": "bdev_nvme_attach_controller" 00:22:31.591 },{ 00:22:31.591 "params": { 00:22:31.591 "name": "Nvme10", 00:22:31.591 "trtype": "tcp", 00:22:31.591 "traddr": "10.0.0.2", 00:22:31.591 "adrfam": "ipv4", 00:22:31.591 "trsvcid": "4420", 00:22:31.591 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:31.591 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:31.591 "hdgst": false, 00:22:31.591 "ddgst": false 00:22:31.591 }, 00:22:31.591 "method": "bdev_nvme_attach_controller" 00:22:31.591 }' 00:22:31.591 [2024-07-24 23:10:49.282662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.591 [2024-07-24 23:10:49.348416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.976 Running I/O for 1 seconds... 
00:22:34.362
00:22:34.362 Latency(us)
00:22:34.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:34.362 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:34.362 Verification LBA range: start 0x0 length 0x400
00:22:34.362 Nvme1n1 : 1.15 222.30 13.89 0.00 0.00 280753.28 22719.15 248162.99
00:22:34.362 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:34.362 Verification LBA range: start 0x0 length 0x400
00:22:34.362 Nvme2n1 : 1.17 219.43 13.71 0.00 0.00 284147.20 36700.16 267386.88
00:22:34.362 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:34.362 Verification LBA range: start 0x0 length 0x400
00:22:34.362 Nvme3n1 : 1.09 240.31 15.02 0.00 0.00 249618.92 19223.89 244667.73
00:22:34.362 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:34.362 Verification LBA range: start 0x0 length 0x400
00:22:34.362 Nvme4n1 : 1.12 228.26 14.27 0.00 0.00 258391.25 18350.08 246415.36
00:22:34.362 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:34.362 Verification LBA range: start 0x0 length 0x400
00:22:34.362 Nvme5n1 : 1.16 219.95 13.75 0.00 0.00 269137.49 21080.75 270882.13
00:22:34.362 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:34.362 Verification LBA range: start 0x0 length 0x400
00:22:34.362 Nvme6n1 : 1.12 228.07 14.25 0.00 0.00 249274.03 22391.47 227191.47
00:22:34.362 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:34.362 Verification LBA range: start 0x0 length 0x400
00:22:34.362 Nvme7n1 : 1.16 275.76 17.24 0.00 0.00 207072.00 9994.24 253405.87
00:22:34.362 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:34.362 Verification LBA range: start 0x0 length 0x400
00:22:34.362 Nvme8n1 : 1.18 276.52 17.28 0.00 0.00 203002.40 2020.69 241172.48
00:22:34.362 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:34.362 Verification LBA range: start 0x0 length 0x400
00:22:34.362 Nvme9n1 : 1.18 270.88 16.93 0.00 0.00 203731.88 13653.33 248162.99
00:22:34.362 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:34.362 Verification LBA range: start 0x0 length 0x400
00:22:34.362 Nvme10n1 : 1.19 269.98 16.87 0.00 0.00 200758.95 11414.19 246415.36
00:22:34.362 ===================================================================================================================
00:22:34.362 Total : 2451.46 153.22 0.00 0.00 237191.25 2020.69 270882.13
00:22:34.362 23:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:22:34.362 23:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:22:34.362 23:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:22:34.362 23:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:34.362 23:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini
00:22:34.362 23:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup
00:22:34.362 23:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync
00:22:34.362 23:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:22:34.362 23:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e
00:22:34.362 23:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20}
00:22:34.362
23:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:34.362 rmmod nvme_tcp 00:22:34.363 rmmod nvme_fabrics 00:22:34.363 rmmod nvme_keyring 00:22:34.363 23:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:34.363 23:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:22:34.363 23:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:22:34.363 23:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 921862 ']' 00:22:34.363 23:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 921862 00:22:34.363 23:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 921862 ']' 00:22:34.363 23:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 921862 00:22:34.363 23:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:22:34.363 23:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:34.363 23:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 921862 00:22:34.363 23:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:34.363 23:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:34.363 23:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 921862' 00:22:34.363 killing process with 
pid 921862 00:22:34.363 23:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 921862 00:22:34.363 23:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 921862 00:22:34.623 23:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:34.623 23:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:34.623 23:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:34.623 23:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:34.623 23:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:34.623 23:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.623 23:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.623 23:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.170 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:37.170 00:22:37.170 real 0m17.301s 00:22:37.170 user 0m32.706s 00:22:37.170 sys 0m7.272s 00:22:37.170 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:37.170 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:37.170 ************************************ 00:22:37.170 END TEST nvmf_shutdown_tc1 00:22:37.170 ************************************ 
00:22:37.170 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:37.170 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:37.170 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:37.170 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:37.170 ************************************ 00:22:37.170 START TEST nvmf_shutdown_tc2 00:22:37.170 ************************************ 00:22:37.170 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:22:37.170 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:22:37.170 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:37.170 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:37.170 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.170 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:37.170 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:37.170 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:37.170 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.170 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:37.170 23:10:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.170 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:37.170 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:37.170 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:37.170 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:37.170 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:37.170 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:37.170 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:37.170 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:37.170 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:37.170 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:37.170 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 
-- # local -ga e810 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:37.171 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:37.171 23:10:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:37.171 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:37.171 Found net devices under 0000:31:00.0: cvl_0_0 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:37.171 Found net devices under 0000:31:00.1: cvl_0_1 00:22:37.171 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.172 
23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:37.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:37.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.795 ms 00:22:37.172 00:22:37.172 --- 10.0.0.2 ping statistics --- 00:22:37.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.172 rtt min/avg/max/mdev = 0.795/0.795/0.795/0.000 ms 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:37.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:37.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.375 ms 00:22:37.172 00:22:37.172 --- 10.0.0.1 ping statistics --- 00:22:37.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.172 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:37.172 23:10:54 
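The `nvmf_tcp_init` sequence traced above (flush addresses, create the `cvl_0_0_ns_spdk` namespace, move the target NIC into it, assign the 10.0.0.x/24 addresses, open TCP port 4420, then cross-ping) can be sketched as a dry-run script. Interface names and IPs are taken from the log; the `run` wrapper that only prints each command is an addition so the sketch executes without root or real NICs.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns setup performed by nvmf_tcp_init (nvmf/common.sh).
# The `run` wrapper below is an assumption added for this sketch: it echoes
# commands instead of executing them, so no root privileges are needed.
set -euo pipefail

TARGET_IF=cvl_0_0        # target-side interface (moved into the namespace)
INITIATOR_IF=cvl_0_1     # initiator-side interface (stays in the root netns)
NETNS=cvl_0_0_ns_spdk

run() { echo "+ $*"; }   # swap for: "$@"  (as root) to apply for real

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NETNS"
run ip link set "$TARGET_IF" netns "$NETNS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                      # initiator IP
run ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"  # target IP
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NETNS" ip link set "$TARGET_IF" up
run ip netns exec "$NETNS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                 # initiator -> target
run ip netns exec "$NETNS" ping -c 1 10.0.0.1          # target -> initiator
```

The cross-ping at the end is what produces the two `ping statistics` blocks in the log: one from the root namespace to 10.0.0.2, one from inside the namespace back to 10.0.0.1.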
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=923913 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 923913 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 923913 ']' 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:37.172 23:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:37.172 [2024-07-24 23:10:54.852019] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:22:37.172 [2024-07-24 23:10:54.852086] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:37.172 EAL: No free 2048 kB hugepages reported on node 1 00:22:37.172 [2024-07-24 23:10:54.947943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:37.433 [2024-07-24 23:10:55.019286] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:37.433 [2024-07-24 23:10:55.019326] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:37.433 [2024-07-24 23:10:55.019332] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:37.433 [2024-07-24 23:10:55.019336] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:37.433 [2024-07-24 23:10:55.019344] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:37.433 [2024-07-24 23:10:55.019453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.433 [2024-07-24 23:10:55.019615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:37.433 [2024-07-24 23:10:55.019790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:37.433 [2024-07-24 23:10:55.019791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:38.004 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:38.004 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:38.004 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:38.004 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:38.004 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.004 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.004 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:38.004 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.004 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.004 [2024-07-24 23:10:55.674178] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.004 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.004 23:10:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:38.004 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:38.004 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:38.004 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.004 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:38.004 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.004 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:38.004 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.004 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:38.005 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.005 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:38.005 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.005 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:38.005 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.005 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 
00:22:38.005 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.005 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:38.005 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.005 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:38.005 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.005 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:38.005 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.005 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:38.005 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.005 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:38.005 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:38.005 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.005 23:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.005 Malloc1 00:22:38.005 [2024-07-24 23:10:55.760875] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.005 Malloc2 00:22:38.265 Malloc3 00:22:38.265 Malloc4 00:22:38.265 Malloc5 00:22:38.265 Malloc6 00:22:38.265 Malloc7 00:22:38.265 Malloc8 00:22:38.265 Malloc9 
00:22:38.525 Malloc10 00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=924128 00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 924128 /var/tmp/bdevperf.sock 00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 924128 ']' 00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:38.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.526 { 00:22:38.526 "params": { 00:22:38.526 "name": "Nvme$subsystem", 00:22:38.526 "trtype": "$TEST_TRANSPORT", 00:22:38.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.526 "adrfam": "ipv4", 00:22:38.526 "trsvcid": "$NVMF_PORT", 00:22:38.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.526 "hdgst": ${hdgst:-false}, 00:22:38.526 "ddgst": ${ddgst:-false} 00:22:38.526 }, 00:22:38.526 "method": "bdev_nvme_attach_controller" 00:22:38.526 } 00:22:38.526 EOF 00:22:38.526 )") 00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.526 { 00:22:38.526 "params": { 00:22:38.526 "name": "Nvme$subsystem", 00:22:38.526 "trtype": "$TEST_TRANSPORT", 00:22:38.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.526 "adrfam": "ipv4", 00:22:38.526 "trsvcid": "$NVMF_PORT", 00:22:38.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.526 "hdgst": ${hdgst:-false}, 00:22:38.526 "ddgst": ${ddgst:-false} 00:22:38.526 }, 00:22:38.526 "method": "bdev_nvme_attach_controller" 00:22:38.526 } 00:22:38.526 EOF 00:22:38.526 )") 00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.526 { 00:22:38.526 "params": { 00:22:38.526 "name": "Nvme$subsystem", 00:22:38.526 "trtype": "$TEST_TRANSPORT", 00:22:38.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.526 "adrfam": "ipv4", 00:22:38.526 "trsvcid": "$NVMF_PORT", 00:22:38.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.526 "hdgst": ${hdgst:-false}, 00:22:38.526 "ddgst": ${ddgst:-false} 00:22:38.526 }, 00:22:38.526 "method": "bdev_nvme_attach_controller" 00:22:38.526 } 00:22:38.526 EOF 00:22:38.526 )") 00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:22:38.526 { 00:22:38.526 "params": { 00:22:38.526 "name": "Nvme$subsystem", 00:22:38.526 "trtype": "$TEST_TRANSPORT", 00:22:38.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.526 "adrfam": "ipv4", 00:22:38.526 "trsvcid": "$NVMF_PORT", 00:22:38.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.526 "hdgst": ${hdgst:-false}, 00:22:38.526 "ddgst": ${ddgst:-false} 00:22:38.526 }, 00:22:38.526 "method": "bdev_nvme_attach_controller" 00:22:38.526 } 00:22:38.526 EOF 00:22:38.526 )") 00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.526 { 00:22:38.526 "params": { 00:22:38.526 "name": "Nvme$subsystem", 00:22:38.526 "trtype": "$TEST_TRANSPORT", 00:22:38.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.526 "adrfam": "ipv4", 00:22:38.526 "trsvcid": "$NVMF_PORT", 00:22:38.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.526 "hdgst": ${hdgst:-false}, 00:22:38.526 "ddgst": ${ddgst:-false} 00:22:38.526 }, 00:22:38.526 "method": "bdev_nvme_attach_controller" 00:22:38.526 } 00:22:38.526 EOF 00:22:38.526 )") 00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.526 { 00:22:38.526 "params": { 00:22:38.526 "name": "Nvme$subsystem", 00:22:38.526 "trtype": "$TEST_TRANSPORT", 
00:22:38.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.526 "adrfam": "ipv4", 00:22:38.526 "trsvcid": "$NVMF_PORT", 00:22:38.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.526 "hdgst": ${hdgst:-false}, 00:22:38.526 "ddgst": ${ddgst:-false} 00:22:38.526 }, 00:22:38.526 "method": "bdev_nvme_attach_controller" 00:22:38.526 } 00:22:38.526 EOF 00:22:38.526 )") 00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.526 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.526 { 00:22:38.526 "params": { 00:22:38.526 "name": "Nvme$subsystem", 00:22:38.526 "trtype": "$TEST_TRANSPORT", 00:22:38.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.526 "adrfam": "ipv4", 00:22:38.526 "trsvcid": "$NVMF_PORT", 00:22:38.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.526 "hdgst": ${hdgst:-false}, 00:22:38.526 "ddgst": ${ddgst:-false} 00:22:38.526 }, 00:22:38.526 "method": "bdev_nvme_attach_controller" 00:22:38.526 } 00:22:38.526 EOF 00:22:38.526 )") 00:22:38.526 [2024-07-24 23:10:56.199242] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:22:38.527 [2024-07-24 23:10:56.199300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid924128 ] 00:22:38.527 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:38.527 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.527 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.527 { 00:22:38.527 "params": { 00:22:38.527 "name": "Nvme$subsystem", 00:22:38.527 "trtype": "$TEST_TRANSPORT", 00:22:38.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.527 "adrfam": "ipv4", 00:22:38.527 "trsvcid": "$NVMF_PORT", 00:22:38.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.527 "hdgst": ${hdgst:-false}, 00:22:38.527 "ddgst": ${ddgst:-false} 00:22:38.527 }, 00:22:38.527 "method": "bdev_nvme_attach_controller" 00:22:38.527 } 00:22:38.527 EOF 00:22:38.527 )") 00:22:38.527 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:38.527 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.527 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.527 { 00:22:38.527 "params": { 00:22:38.527 "name": "Nvme$subsystem", 00:22:38.527 "trtype": "$TEST_TRANSPORT", 00:22:38.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.527 "adrfam": "ipv4", 00:22:38.527 "trsvcid": "$NVMF_PORT", 00:22:38.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.527 "hdgst": 
${hdgst:-false}, 00:22:38.527 "ddgst": ${ddgst:-false} 00:22:38.527 }, 00:22:38.527 "method": "bdev_nvme_attach_controller" 00:22:38.527 } 00:22:38.527 EOF 00:22:38.527 )") 00:22:38.527 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:38.527 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.527 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.527 { 00:22:38.527 "params": { 00:22:38.527 "name": "Nvme$subsystem", 00:22:38.527 "trtype": "$TEST_TRANSPORT", 00:22:38.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.527 "adrfam": "ipv4", 00:22:38.527 "trsvcid": "$NVMF_PORT", 00:22:38.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.527 "hdgst": ${hdgst:-false}, 00:22:38.527 "ddgst": ${ddgst:-false} 00:22:38.527 }, 00:22:38.527 "method": "bdev_nvme_attach_controller" 00:22:38.527 } 00:22:38.527 EOF 00:22:38.527 )") 00:22:38.527 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:38.527 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.527 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
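The repeated `config+=("$(cat <<-EOF ... EOF)")` blocks above follow a simple bash pattern: one heredoc per subsystem id appended to an array, then comma-joined via `IFS=,` and `printf` into the single JSON document that bdevperf reads from `--json /dev/fd/63`. A minimal self-contained sketch of that pattern (the transport/IP/port values are placeholders standing in for the live test environment, not read from it):

```shell
#!/usr/bin/env bash
# Minimal sketch of the gen_nvmf_target_json pattern traced above: one heredoc
# per subsystem id, collected into an array and comma-joined by printf.
# TEST_TRANSPORT / IP / port values below are placeholders for illustration.
set -euo pipefail

TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

gen_nvmf_target_json() {
    local subsystem
    local config=()
    # ${@:-1} defaults to a single subsystem "1" when no ids are passed
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem", "trtype": "$TEST_TRANSPORT", "traddr": "$NVMF_FIRST_TARGET_IP", "adrfam": "ipv4", "trsvcid": "$NVMF_PORT", "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" }
EOF
)")
    done
    local IFS=,            # join array elements with commas, as in common.sh
    printf '%s\n' "${config[*]}"
}

gen_nvmf_target_json 1 2
```

This is why the resolved output that follows in the log shows ten `bdev_nvme_attach_controller` entries, `Nvme1` through `Nvme10`, separated by `},{`.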
00:22:38.527 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:22:38.527 23:10:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:38.527 "params": { 00:22:38.527 "name": "Nvme1", 00:22:38.527 "trtype": "tcp", 00:22:38.527 "traddr": "10.0.0.2", 00:22:38.527 "adrfam": "ipv4", 00:22:38.527 "trsvcid": "4420", 00:22:38.527 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.527 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:38.527 "hdgst": false, 00:22:38.527 "ddgst": false 00:22:38.527 }, 00:22:38.527 "method": "bdev_nvme_attach_controller" 00:22:38.527 },{ 00:22:38.527 "params": { 00:22:38.527 "name": "Nvme2", 00:22:38.527 "trtype": "tcp", 00:22:38.527 "traddr": "10.0.0.2", 00:22:38.527 "adrfam": "ipv4", 00:22:38.527 "trsvcid": "4420", 00:22:38.527 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:38.527 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:38.527 "hdgst": false, 00:22:38.527 "ddgst": false 00:22:38.527 }, 00:22:38.527 "method": "bdev_nvme_attach_controller" 00:22:38.527 },{ 00:22:38.527 "params": { 00:22:38.527 "name": "Nvme3", 00:22:38.527 "trtype": "tcp", 00:22:38.527 "traddr": "10.0.0.2", 00:22:38.527 "adrfam": "ipv4", 00:22:38.527 "trsvcid": "4420", 00:22:38.527 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:38.527 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:38.527 "hdgst": false, 00:22:38.527 "ddgst": false 00:22:38.527 }, 00:22:38.527 "method": "bdev_nvme_attach_controller" 00:22:38.527 },{ 00:22:38.527 "params": { 00:22:38.527 "name": "Nvme4", 00:22:38.527 "trtype": "tcp", 00:22:38.527 "traddr": "10.0.0.2", 00:22:38.527 "adrfam": "ipv4", 00:22:38.527 "trsvcid": "4420", 00:22:38.527 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:38.527 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:38.527 "hdgst": false, 00:22:38.527 "ddgst": false 00:22:38.527 }, 00:22:38.527 "method": "bdev_nvme_attach_controller" 00:22:38.527 },{ 00:22:38.527 "params": { 
00:22:38.527 "name": "Nvme5", 00:22:38.527 "trtype": "tcp", 00:22:38.527 "traddr": "10.0.0.2", 00:22:38.527 "adrfam": "ipv4", 00:22:38.527 "trsvcid": "4420", 00:22:38.527 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:38.527 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:38.527 "hdgst": false, 00:22:38.527 "ddgst": false 00:22:38.527 }, 00:22:38.527 "method": "bdev_nvme_attach_controller" 00:22:38.527 },{ 00:22:38.527 "params": { 00:22:38.527 "name": "Nvme6", 00:22:38.527 "trtype": "tcp", 00:22:38.527 "traddr": "10.0.0.2", 00:22:38.527 "adrfam": "ipv4", 00:22:38.527 "trsvcid": "4420", 00:22:38.527 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:38.527 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:38.527 "hdgst": false, 00:22:38.527 "ddgst": false 00:22:38.527 }, 00:22:38.527 "method": "bdev_nvme_attach_controller" 00:22:38.527 },{ 00:22:38.527 "params": { 00:22:38.527 "name": "Nvme7", 00:22:38.527 "trtype": "tcp", 00:22:38.527 "traddr": "10.0.0.2", 00:22:38.527 "adrfam": "ipv4", 00:22:38.527 "trsvcid": "4420", 00:22:38.527 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:38.527 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:38.527 "hdgst": false, 00:22:38.527 "ddgst": false 00:22:38.527 }, 00:22:38.527 "method": "bdev_nvme_attach_controller" 00:22:38.527 },{ 00:22:38.527 "params": { 00:22:38.527 "name": "Nvme8", 00:22:38.527 "trtype": "tcp", 00:22:38.527 "traddr": "10.0.0.2", 00:22:38.527 "adrfam": "ipv4", 00:22:38.527 "trsvcid": "4420", 00:22:38.527 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:38.527 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:38.527 "hdgst": false, 00:22:38.527 "ddgst": false 00:22:38.527 }, 00:22:38.527 "method": "bdev_nvme_attach_controller" 00:22:38.527 },{ 00:22:38.527 "params": { 00:22:38.527 "name": "Nvme9", 00:22:38.527 "trtype": "tcp", 00:22:38.527 "traddr": "10.0.0.2", 00:22:38.527 "adrfam": "ipv4", 00:22:38.527 "trsvcid": "4420", 00:22:38.527 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:38.527 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:38.527 "hdgst": false, 00:22:38.527 "ddgst": false 00:22:38.527 }, 00:22:38.527 "method": "bdev_nvme_attach_controller" 00:22:38.527 },{ 00:22:38.527 "params": { 00:22:38.527 "name": "Nvme10", 00:22:38.527 "trtype": "tcp", 00:22:38.527 "traddr": "10.0.0.2", 00:22:38.527 "adrfam": "ipv4", 00:22:38.527 "trsvcid": "4420", 00:22:38.527 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:38.527 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:38.527 "hdgst": false, 00:22:38.527 "ddgst": false 00:22:38.527 }, 00:22:38.527 "method": "bdev_nvme_attach_controller" 00:22:38.527 }' 00:22:38.527 [2024-07-24 23:10:56.267686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.788 [2024-07-24 23:10:56.333750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.171 Running I/O for 10 seconds... 00:22:40.171 23:10:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:40.171 23:10:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:40.171 23:10:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:40.171 23:10:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.171 23:10:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:40.434 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.434 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:40.434 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:40.434 23:10:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:40.434 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:22:40.434 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:22:40.434 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:40.434 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:40.434 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:40.434 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:40.434 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.434 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:40.434 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.434 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:40.434 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:40.434 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:40.730 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:40.730 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:40.730 23:10:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:40.730 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:40.730 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.730 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:40.730 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.730 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:22:40.730 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:22:40.730 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:40.994 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:40.994 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:40.994 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:40.994 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:40.994 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.994 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:40.994 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:22:40.994 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:22:40.994 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:22:40.994 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:22:40.994 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:22:40.994 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:22:40.994 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 924128 00:22:40.994 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 924128 ']' 00:22:40.994 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 924128 00:22:40.994 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:40.994 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:40.994 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 924128 00:22:41.255 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:41.255 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:41.255 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 924128' 00:22:41.255 killing process with pid 924128 00:22:41.255 23:10:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 924128 00:22:41.255 23:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 924128 00:22:41.255 Received shutdown signal, test time was about 0.958197 seconds 00:22:41.255 00:22:41.255 Latency(us) 00:22:41.255 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.255 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.255 Verification LBA range: start 0x0 length 0x400 00:22:41.255 Nvme1n1 : 0.95 269.81 16.86 0.00 0.00 234341.97 17039.36 242920.11 00:22:41.255 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.255 Verification LBA range: start 0x0 length 0x400 00:22:41.255 Nvme2n1 : 0.96 267.42 16.71 0.00 0.00 231672.32 18131.63 251658.24 00:22:41.255 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.255 Verification LBA range: start 0x0 length 0x400 00:22:41.255 Nvme3n1 : 0.95 269.17 16.82 0.00 0.00 225375.15 36700.16 228939.09 00:22:41.255 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.255 Verification LBA range: start 0x0 length 0x400 00:22:41.255 Nvme4n1 : 0.95 268.36 16.77 0.00 0.00 221415.89 21299.20 248162.99 00:22:41.255 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.255 Verification LBA range: start 0x0 length 0x400 00:22:41.255 Nvme5n1 : 0.94 204.22 12.76 0.00 0.00 284428.23 26105.17 255153.49 00:22:41.255 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.255 Verification LBA range: start 0x0 length 0x400 00:22:41.255 Nvme6n1 : 0.92 208.97 13.06 0.00 0.00 270834.63 20206.93 251658.24 00:22:41.255 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.255 Verification LBA range: start 0x0 length 0x400 00:22:41.255 Nvme7n1 : 0.94 271.30 16.96 0.00 0.00 
204749.65 12834.13 255153.49 00:22:41.255 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.255 Verification LBA range: start 0x0 length 0x400 00:22:41.255 Nvme8n1 : 0.93 206.22 12.89 0.00 0.00 262425.03 18459.31 246415.36 00:22:41.255 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.255 Verification LBA range: start 0x0 length 0x400 00:22:41.255 Nvme9n1 : 0.92 207.99 13.00 0.00 0.00 253148.73 18350.08 248162.99 00:22:41.255 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.255 Verification LBA range: start 0x0 length 0x400 00:22:41.255 Nvme10n1 : 0.94 204.78 12.80 0.00 0.00 252020.34 20316.16 272629.76 00:22:41.255 =================================================================================================================== 00:22:41.255 Total : 2378.24 148.64 0.00 0.00 241108.31 12834.13 272629.76 00:22:41.255 23:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:22:42.638 23:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 923913 00:22:42.638 23:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:22:42.638 23:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:42.638 23:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:42.638 23:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:42.638 23:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:42.638 23:11:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:42.638 23:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:22:42.638 23:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:42.638 23:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:22:42.638 23:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:42.638 23:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:42.638 rmmod nvme_tcp 00:22:42.638 rmmod nvme_fabrics 00:22:42.638 rmmod nvme_keyring 00:22:42.638 23:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:42.638 23:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:22:42.638 23:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:22:42.638 23:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 923913 ']' 00:22:42.638 23:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 923913 00:22:42.638 23:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 923913 ']' 00:22:42.638 23:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 923913 00:22:42.638 23:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:42.638 23:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:42.638 23:11:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 923913 00:22:42.638 23:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:42.638 23:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:42.638 23:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 923913' 00:22:42.638 killing process with pid 923913 00:22:42.638 23:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 923913 00:22:42.638 23:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 923913 00:22:42.638 23:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:42.638 23:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:42.638 23:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:42.638 23:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:42.638 23:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:42.638 23:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.638 23:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:42.638 23:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.183 23:11:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:45.183 00:22:45.183 real 0m8.055s 00:22:45.183 user 0m24.437s 00:22:45.183 sys 0m1.263s 00:22:45.183 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:45.183 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:45.183 ************************************ 00:22:45.183 END TEST nvmf_shutdown_tc2 00:22:45.183 ************************************ 00:22:45.183 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:45.183 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:45.183 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:45.183 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:45.183 ************************************ 00:22:45.183 START TEST nvmf_shutdown_tc3 00:22:45.183 ************************************ 00:22:45.183 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:22:45.183 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:22:45.183 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:45.183 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:45.183 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:45.183 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:22:45.183 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:45.183 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:45.183 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.183 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:45.183 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.183 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:45.183 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:45.183 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:45.183 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:45.183 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:45.183 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:45.183 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:45.183 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:45.183 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:45.183 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 
00:22:45.183 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:45.183 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:22:45.183 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:45.183 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:22:45.183 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:45.184 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:45.184 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:45.184 Found net devices under 0000:31:00.0: cvl_0_0 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:45.184 Found net devices under 0000:31:00.1: cvl_0_1 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:45.184 23:11:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:45.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:45.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:22:45.184 00:22:45.184 --- 10.0.0.2 ping statistics --- 00:22:45.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.184 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:45.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:45.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.340 ms 00:22:45.184 00:22:45.184 --- 10.0.0.1 ping statistics --- 00:22:45.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.184 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=925583 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 925583 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 925583 ']' 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:45.184 23:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:45.446 [2024-07-24 23:11:03.013997] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:22:45.446 [2024-07-24 23:11:03.014069] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.446 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.446 [2024-07-24 23:11:03.112285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:45.446 [2024-07-24 23:11:03.175084] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.446 [2024-07-24 23:11:03.175115] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.446 [2024-07-24 23:11:03.175120] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.446 [2024-07-24 23:11:03.175125] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.446 [2024-07-24 23:11:03.175129] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:45.446 [2024-07-24 23:11:03.175260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:45.446 [2024-07-24 23:11:03.175419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:45.446 [2024-07-24 23:11:03.175473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.446 [2024-07-24 23:11:03.175474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:46.016 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:46.016 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:46.016 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:46.016 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:46.016 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:46.277 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:46.277 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:46.277 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.277 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:46.277 [2024-07-24 23:11:03.829372] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:46.277 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.277 23:11:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:46.277 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:46.277 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:46.277 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:46.277 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:46.277 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:46.277 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:46.277 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:46.277 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:46.277 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:46.277 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:46.277 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:46.277 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:46.277 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:46.277 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 
00:22:46.277 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:46.277 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:46.277 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:46.277 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:46.277 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:46.277 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:46.277 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:46.277 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:46.277 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:46.277 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:46.277 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:46.277 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.277 23:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:46.277 Malloc1 00:22:46.277 [2024-07-24 23:11:03.923951] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:46.277 Malloc2 00:22:46.277 Malloc3 00:22:46.277 Malloc4 00:22:46.277 Malloc5 00:22:46.539 Malloc6 00:22:46.539 Malloc7 00:22:46.539 Malloc8 00:22:46.539 Malloc9 
00:22:46.539 Malloc10 00:22:46.539 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.539 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:46.539 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:46.539 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:46.539 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=925961 00:22:46.539 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 925961 /var/tmp/bdevperf.sock 00:22:46.539 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 925961 ']' 00:22:46.539 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:46.539 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:46.539 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:46.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:46.539 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:46.539 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:46.539 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:46.539 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:46.539 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:22:46.539 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:22:46.539 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:46.539 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:46.539 { 00:22:46.539 "params": { 00:22:46.539 "name": "Nvme$subsystem", 00:22:46.539 "trtype": "$TEST_TRANSPORT", 00:22:46.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.539 "adrfam": "ipv4", 00:22:46.539 "trsvcid": "$NVMF_PORT", 00:22:46.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.539 "hdgst": ${hdgst:-false}, 00:22:46.539 "ddgst": ${ddgst:-false} 00:22:46.539 }, 00:22:46.539 "method": "bdev_nvme_attach_controller" 00:22:46.539 } 00:22:46.539 EOF 00:22:46.539 )") 00:22:46.539 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:46.539 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:22:46.800 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:46.800 { 00:22:46.800 "params": { 00:22:46.800 "name": "Nvme$subsystem", 00:22:46.800 "trtype": "$TEST_TRANSPORT", 00:22:46.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.800 "adrfam": "ipv4", 00:22:46.800 "trsvcid": "$NVMF_PORT", 00:22:46.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.800 "hdgst": ${hdgst:-false}, 00:22:46.800 "ddgst": ${ddgst:-false} 00:22:46.800 }, 00:22:46.800 "method": "bdev_nvme_attach_controller" 00:22:46.800 } 00:22:46.800 EOF 00:22:46.800 )") 00:22:46.800 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:46.800 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:46.800 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:46.800 { 00:22:46.800 "params": { 00:22:46.800 "name": "Nvme$subsystem", 00:22:46.800 "trtype": "$TEST_TRANSPORT", 00:22:46.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.800 "adrfam": "ipv4", 00:22:46.800 "trsvcid": "$NVMF_PORT", 00:22:46.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.800 "hdgst": ${hdgst:-false}, 00:22:46.800 "ddgst": ${ddgst:-false} 00:22:46.800 }, 00:22:46.800 "method": "bdev_nvme_attach_controller" 00:22:46.800 } 00:22:46.800 EOF 00:22:46.800 )") 00:22:46.800 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:46.800 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:46.800 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:22:46.800 { 00:22:46.800 "params": { 00:22:46.800 "name": "Nvme$subsystem", 00:22:46.800 "trtype": "$TEST_TRANSPORT", 00:22:46.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.800 "adrfam": "ipv4", 00:22:46.800 "trsvcid": "$NVMF_PORT", 00:22:46.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.800 "hdgst": ${hdgst:-false}, 00:22:46.800 "ddgst": ${ddgst:-false} 00:22:46.801 }, 00:22:46.801 "method": "bdev_nvme_attach_controller" 00:22:46.801 } 00:22:46.801 EOF 00:22:46.801 )") 00:22:46.801 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:46.801 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:46.801 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:46.801 { 00:22:46.801 "params": { 00:22:46.801 "name": "Nvme$subsystem", 00:22:46.801 "trtype": "$TEST_TRANSPORT", 00:22:46.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.801 "adrfam": "ipv4", 00:22:46.801 "trsvcid": "$NVMF_PORT", 00:22:46.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.801 "hdgst": ${hdgst:-false}, 00:22:46.801 "ddgst": ${ddgst:-false} 00:22:46.801 }, 00:22:46.801 "method": "bdev_nvme_attach_controller" 00:22:46.801 } 00:22:46.801 EOF 00:22:46.801 )") 00:22:46.801 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:46.801 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:46.801 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:46.801 { 00:22:46.801 "params": { 00:22:46.801 "name": "Nvme$subsystem", 00:22:46.801 "trtype": "$TEST_TRANSPORT", 
00:22:46.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.801 "adrfam": "ipv4", 00:22:46.801 "trsvcid": "$NVMF_PORT", 00:22:46.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.801 "hdgst": ${hdgst:-false}, 00:22:46.801 "ddgst": ${ddgst:-false} 00:22:46.801 }, 00:22:46.801 "method": "bdev_nvme_attach_controller" 00:22:46.801 } 00:22:46.801 EOF 00:22:46.801 )") 00:22:46.801 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:46.801 [2024-07-24 23:11:04.360361] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:22:46.801 [2024-07-24 23:11:04.360414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid925961 ] 00:22:46.801 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:46.801 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:46.801 { 00:22:46.801 "params": { 00:22:46.801 "name": "Nvme$subsystem", 00:22:46.801 "trtype": "$TEST_TRANSPORT", 00:22:46.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.801 "adrfam": "ipv4", 00:22:46.801 "trsvcid": "$NVMF_PORT", 00:22:46.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.801 "hdgst": ${hdgst:-false}, 00:22:46.801 "ddgst": ${ddgst:-false} 00:22:46.801 }, 00:22:46.801 "method": "bdev_nvme_attach_controller" 00:22:46.801 } 00:22:46.801 EOF 00:22:46.801 )") 00:22:46.801 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:46.801 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:46.801 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:46.801 { 00:22:46.801 "params": { 00:22:46.801 "name": "Nvme$subsystem", 00:22:46.801 "trtype": "$TEST_TRANSPORT", 00:22:46.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.801 "adrfam": "ipv4", 00:22:46.801 "trsvcid": "$NVMF_PORT", 00:22:46.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.801 "hdgst": ${hdgst:-false}, 00:22:46.801 "ddgst": ${ddgst:-false} 00:22:46.801 }, 00:22:46.801 "method": "bdev_nvme_attach_controller" 00:22:46.801 } 00:22:46.801 EOF 00:22:46.801 )") 00:22:46.801 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:46.801 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:46.801 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:46.801 { 00:22:46.801 "params": { 00:22:46.801 "name": "Nvme$subsystem", 00:22:46.801 "trtype": "$TEST_TRANSPORT", 00:22:46.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.801 "adrfam": "ipv4", 00:22:46.801 "trsvcid": "$NVMF_PORT", 00:22:46.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.801 "hdgst": ${hdgst:-false}, 00:22:46.801 "ddgst": ${ddgst:-false} 00:22:46.801 }, 00:22:46.801 "method": "bdev_nvme_attach_controller" 00:22:46.801 } 00:22:46.801 EOF 00:22:46.801 )") 00:22:46.801 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:46.801 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:46.801 23:11:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:46.801 { 00:22:46.801 "params": { 00:22:46.801 "name": "Nvme$subsystem", 00:22:46.801 "trtype": "$TEST_TRANSPORT", 00:22:46.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.801 "adrfam": "ipv4", 00:22:46.801 "trsvcid": "$NVMF_PORT", 00:22:46.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.801 "hdgst": ${hdgst:-false}, 00:22:46.801 "ddgst": ${ddgst:-false} 00:22:46.801 }, 00:22:46.801 "method": "bdev_nvme_attach_controller" 00:22:46.801 } 00:22:46.801 EOF 00:22:46.801 )") 00:22:46.801 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:46.801 EAL: No free 2048 kB hugepages reported on node 1 00:22:46.801 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:22:46.801 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:22:46.801 23:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:46.801 "params": { 00:22:46.801 "name": "Nvme1", 00:22:46.801 "trtype": "tcp", 00:22:46.801 "traddr": "10.0.0.2", 00:22:46.801 "adrfam": "ipv4", 00:22:46.801 "trsvcid": "4420", 00:22:46.801 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:46.801 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:46.801 "hdgst": false, 00:22:46.801 "ddgst": false 00:22:46.801 }, 00:22:46.801 "method": "bdev_nvme_attach_controller" 00:22:46.801 },{ 00:22:46.801 "params": { 00:22:46.801 "name": "Nvme2", 00:22:46.801 "trtype": "tcp", 00:22:46.801 "traddr": "10.0.0.2", 00:22:46.801 "adrfam": "ipv4", 00:22:46.801 "trsvcid": "4420", 00:22:46.801 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:46.801 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:46.801 "hdgst": false, 00:22:46.801 "ddgst": false 00:22:46.801 }, 00:22:46.801 
"method": "bdev_nvme_attach_controller" 00:22:46.801 },{ 00:22:46.801 "params": { 00:22:46.801 "name": "Nvme3", 00:22:46.801 "trtype": "tcp", 00:22:46.801 "traddr": "10.0.0.2", 00:22:46.801 "adrfam": "ipv4", 00:22:46.801 "trsvcid": "4420", 00:22:46.801 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:46.801 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:46.801 "hdgst": false, 00:22:46.801 "ddgst": false 00:22:46.801 }, 00:22:46.801 "method": "bdev_nvme_attach_controller" 00:22:46.801 },{ 00:22:46.801 "params": { 00:22:46.801 "name": "Nvme4", 00:22:46.801 "trtype": "tcp", 00:22:46.801 "traddr": "10.0.0.2", 00:22:46.801 "adrfam": "ipv4", 00:22:46.801 "trsvcid": "4420", 00:22:46.801 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:46.801 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:46.801 "hdgst": false, 00:22:46.801 "ddgst": false 00:22:46.801 }, 00:22:46.801 "method": "bdev_nvme_attach_controller" 00:22:46.801 },{ 00:22:46.801 "params": { 00:22:46.801 "name": "Nvme5", 00:22:46.801 "trtype": "tcp", 00:22:46.801 "traddr": "10.0.0.2", 00:22:46.801 "adrfam": "ipv4", 00:22:46.801 "trsvcid": "4420", 00:22:46.801 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:46.801 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:46.801 "hdgst": false, 00:22:46.801 "ddgst": false 00:22:46.801 }, 00:22:46.801 "method": "bdev_nvme_attach_controller" 00:22:46.801 },{ 00:22:46.801 "params": { 00:22:46.801 "name": "Nvme6", 00:22:46.801 "trtype": "tcp", 00:22:46.801 "traddr": "10.0.0.2", 00:22:46.801 "adrfam": "ipv4", 00:22:46.801 "trsvcid": "4420", 00:22:46.801 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:46.801 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:46.801 "hdgst": false, 00:22:46.801 "ddgst": false 00:22:46.801 }, 00:22:46.801 "method": "bdev_nvme_attach_controller" 00:22:46.801 },{ 00:22:46.801 "params": { 00:22:46.801 "name": "Nvme7", 00:22:46.801 "trtype": "tcp", 00:22:46.801 "traddr": "10.0.0.2", 00:22:46.801 "adrfam": "ipv4", 00:22:46.801 "trsvcid": "4420", 00:22:46.801 "subnqn": 
"nqn.2016-06.io.spdk:cnode7", 00:22:46.802 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:46.802 "hdgst": false, 00:22:46.802 "ddgst": false 00:22:46.802 }, 00:22:46.802 "method": "bdev_nvme_attach_controller" 00:22:46.802 },{ 00:22:46.802 "params": { 00:22:46.802 "name": "Nvme8", 00:22:46.802 "trtype": "tcp", 00:22:46.802 "traddr": "10.0.0.2", 00:22:46.802 "adrfam": "ipv4", 00:22:46.802 "trsvcid": "4420", 00:22:46.802 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:46.802 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:46.802 "hdgst": false, 00:22:46.802 "ddgst": false 00:22:46.802 }, 00:22:46.802 "method": "bdev_nvme_attach_controller" 00:22:46.802 },{ 00:22:46.802 "params": { 00:22:46.802 "name": "Nvme9", 00:22:46.802 "trtype": "tcp", 00:22:46.802 "traddr": "10.0.0.2", 00:22:46.802 "adrfam": "ipv4", 00:22:46.802 "trsvcid": "4420", 00:22:46.802 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:46.802 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:46.802 "hdgst": false, 00:22:46.802 "ddgst": false 00:22:46.802 }, 00:22:46.802 "method": "bdev_nvme_attach_controller" 00:22:46.802 },{ 00:22:46.802 "params": { 00:22:46.802 "name": "Nvme10", 00:22:46.802 "trtype": "tcp", 00:22:46.802 "traddr": "10.0.0.2", 00:22:46.802 "adrfam": "ipv4", 00:22:46.802 "trsvcid": "4420", 00:22:46.802 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:46.802 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:46.802 "hdgst": false, 00:22:46.802 "ddgst": false 00:22:46.802 }, 00:22:46.802 "method": "bdev_nvme_attach_controller" 00:22:46.802 }' 00:22:46.802 [2024-07-24 23:11:04.427202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.802 [2024-07-24 23:11:04.492318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.715 Running I/O for 10 seconds... 
00:22:49.302 23:11:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:49.302 23:11:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:49.302 23:11:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:49.302 23:11:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.302 23:11:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:49.302 23:11:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.302 23:11:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:49.302 23:11:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:49.302 23:11:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:49.302 23:11:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:49.302 23:11:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:22:49.302 23:11:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:22:49.302 23:11:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:49.302 23:11:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:49.302 23:11:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:49.302 23:11:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:49.302 23:11:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.302 23:11:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:49.302 23:11:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.302 23:11:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:22:49.302 23:11:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:22:49.302 23:11:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:22:49.302 23:11:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:22:49.302 23:11:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:22:49.302 23:11:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 925583 00:22:49.302 23:11:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 925583 ']' 00:22:49.302 23:11:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 925583 00:22:49.302 23:11:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:22:49.302 23:11:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:49.302 23:11:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 925583 00:22:49.302 23:11:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:49.302 23:11:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:49.302 23:11:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 925583' 00:22:49.302 killing process with pid 925583 00:22:49.302 23:11:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 925583 00:22:49.302 23:11:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 925583 00:22:49.302 [2024-07-24 23:11:06.950465] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e850 is same with the state(5) to be set 00:22:49.302 [2024-07-24 23:11:06.950513] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e850 is same with the state(5) to be set 00:22:49.302 [2024-07-24 23:11:06.950520] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e850 is same with the state(5) to be set 00:22:49.302 [2024-07-24 23:11:06.950525] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e850 is same with the state(5) to be set 00:22:49.302 [2024-07-24 23:11:06.950531] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e850 is same with the state(5) to be set 00:22:49.302 [2024-07-24 23:11:06.950535] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e850 is same with the state(5) to be set 00:22:49.302 [2024-07-24 23:11:06.950540] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e850 is same 
with the state(5) to be set
00:22:49.302 [2024-07-24 23:11:06.950545] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e850 is same with the state(5) to be set
00:22:49.303 (message repeated for tqpair=0x251e850 through 23:11:06.950962)
00:22:49.303 [2024-07-24 23:11:06.957216] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521370 is same with the state(5) to be set
00:22:49.305 (message repeated for tqpair=0x2521370 through 23:11:06.957527)
00:22:49.305 [2024-07-24 23:11:06.958600] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251ed10 is same with the state(5) to be set
00:22:49.305 (message repeated for tqpair=0x251ed10 through 23:11:06.958924)
00:22:49.305 [2024-07-24 23:11:06.960172] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f1d0 is same with the state(5) to be set
00:22:49.305 (message repeated for tqpair=0x251f1d0 through 23:11:06.960484)
00:22:49.305 [2024-07-24 23:11:06.961348] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set
00:22:49.305 (message repeated for tqpair=0x251f6b0 through 23:11:06.961401)
00:22:49.305 [2024-07-24 23:11:06.961405] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.305 [2024-07-24 23:11:06.961411] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.305 [2024-07-24 23:11:06.961415] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.305 [2024-07-24 23:11:06.961420] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961424] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961430] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961434] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961439] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961445] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961450] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961454] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961460] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961465] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961469] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961474] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961478] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961484] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961489] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961494] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961499] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961504] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961509] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961514] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961518] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961523] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961528] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961534] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961539] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961544] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961549] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961554] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961558] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961563] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961567] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961572] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961576] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961581] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961587] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961591] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961595] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961600] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961605] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961609] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961614] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961618] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961623] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961627] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961632] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961637] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961642] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961646] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961650] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961655] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961661] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961665] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.961671] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251f6b0 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.962156] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.962172] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.962177] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.962181] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.962186] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.962191] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.962195] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.962200] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.962205] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.962210] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.962214] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.962218] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.962223] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.962228] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.962233] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.962237] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.962241] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.962246] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.962251] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.962255] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.962260] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.962265] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.306 [2024-07-24 23:11:06.962269] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962273] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962278] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962289] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962294] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962298] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962303] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962308] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962313] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962317] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962322] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962326] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962331] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962335] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962340] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962345] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962349] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962353] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962358] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962363] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962367] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962371] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962376] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962381] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962386] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962390] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962395] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962399] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962404] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962408] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962414] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962419] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962423] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962428] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962433] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962437] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962442] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962446] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962451] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962456] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.962460] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251fb70 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.963130] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.963143] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.963148] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.963153] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.963158] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.963162] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.963167] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.963172] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.963177] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.963181] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.963185] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.963191] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.963197] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.963202] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.963208] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.963214] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.963222] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.963228] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.963234] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.963240] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.963247] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.963253] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.963259] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.963265] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.963271] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.963277] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.963283] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.963290] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.963297] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.963304] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.307 [2024-07-24 23:11:06.963310] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.308 [2024-07-24 23:11:06.963315] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.308 [2024-07-24 23:11:06.963322] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.308 [2024-07-24 23:11:06.963329] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.308 [2024-07-24 23:11:06.963334] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.308 [2024-07-24 23:11:06.963338] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.308 [2024-07-24 23:11:06.963343] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set 00:22:49.308 [2024-07-24 23:11:06.963348] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set
00:22:49.308 [2024-07-24 23:11:06.963352] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520030 is same with the state(5) to be set
00:22:49.308 [... identical message repeated 23 more times for tqpair=0x2520030, 23:11:06.963357-23:11:06.963466 ...]
00:22:49.308 [2024-07-24 23:11:06.964234] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25204f0 is same with the state(5) to be set
00:22:49.308 [... identical message repeated 62 more times for tqpair=0x25204f0, 23:11:06.964249-23:11:06.964542 ...]
00:22:49.309 [2024-07-24 23:11:06.965233] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520eb0 is same with the state(5) to be set
00:22:49.309 [... identical message repeated 62 more times for tqpair=0x2520eb0, 23:11:06.965248-23:11:06.965554 ...]
00:22:49.309 [2024-07-24 23:11:06.971559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:49.309 [2024-07-24 23:11:06.971597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.309 [... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for cid:1, cid:2, and cid:3, 23:11:06.971608-23:11:06.971647 ...]
00:22:49.309 [2024-07-24 23:11:06.971655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1076e50 is same with the state(5) to be set
00:22:49.310 [... the same sequence of four aborted ASYNC EVENT REQUESTs followed by a recv-state error repeated for tqpair=0xf09380, 0x10857c0, 0x10867e0, 0x9c0610, 0xeedbe0, 0x107f4b0, 0xee9670, 0xebd5d0, and 0xee1950, 23:11:06.971687-23:11:06.972534 ...]
00:22:49.311 [2024-07-24 23:11:06.973626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.311 [2024-07-24 23:11:06.973649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.311 [... the same WRITE / ABORTED - SQ DELETION pair repeated for cid:62 lba:24320 and cid:63 lba:24448, 23:11:06.973667-23:11:06.973698 ...]
00:22:49.311 [2024-07-24 23:11:06.973708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.311 [2024-07-24
23:11:06.973716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.311 [2024-07-24 23:11:06.973726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.311 [2024-07-24 23:11:06.973734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.311 [2024-07-24 23:11:06.973744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.311 [2024-07-24 23:11:06.973757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.311 [2024-07-24 23:11:06.973768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.311 [2024-07-24 23:11:06.973776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.311 [2024-07-24 23:11:06.973786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.311 [2024-07-24 23:11:06.973794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.311 [2024-07-24 23:11:06.973803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.311 [2024-07-24 23:11:06.973811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.311 [2024-07-24 23:11:06.973821] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.311 [2024-07-24 23:11:06.973829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.311 [2024-07-24 23:11:06.973839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.311 [2024-07-24 23:11:06.973847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.311 [2024-07-24 23:11:06.973856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.311 [2024-07-24 23:11:06.973864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.311 [2024-07-24 23:11:06.973874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.311 [2024-07-24 23:11:06.973882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.311 [2024-07-24 23:11:06.973891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.311 [2024-07-24 23:11:06.973898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.311 [2024-07-24 23:11:06.973911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.311 [2024-07-24 23:11:06.973918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.311 [2024-07-24 23:11:06.973929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.311 [2024-07-24 23:11:06.973936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.311 [2024-07-24 23:11:06.973946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.311 [2024-07-24 23:11:06.973953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.311 [2024-07-24 23:11:06.973962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.311 [2024-07-24 23:11:06.973970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.311 [2024-07-24 23:11:06.973980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.311 [2024-07-24 23:11:06.973988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.311 [2024-07-24 23:11:06.973997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.311 [2024-07-24 23:11:06.974005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.311 [2024-07-24 23:11:06.974015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.311 
[2024-07-24 23:11:06.974024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.311 [2024-07-24 23:11:06.974034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.311 [2024-07-24 23:11:06.974042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.311 [2024-07-24 23:11:06.974052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.311 [2024-07-24 23:11:06.974059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.311 [2024-07-24 23:11:06.974069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.311 [2024-07-24 23:11:06.974077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.311 [2024-07-24 23:11:06.974086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.311 [2024-07-24 23:11:06.974094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.311 [2024-07-24 23:11:06.974104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.311 [2024-07-24 23:11:06.974111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.311 [2024-07-24 23:11:06.974121] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.311 [2024-07-24 23:11:06.974130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.311 [2024-07-24 23:11:06.974140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.311 [2024-07-24 23:11:06.974147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.311 [2024-07-24 23:11:06.974158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.311 [2024-07-24 23:11:06.974164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.311 [2024-07-24 23:11:06.974175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.311 [2024-07-24 23:11:06.974182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.311 [2024-07-24 23:11:06.974192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.311 [2024-07-24 23:11:06.974200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.311 [2024-07-24 23:11:06.974209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.311 [2024-07-24 23:11:06.974217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.311 [2024-07-24 23:11:06.974226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974415] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974509] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 
23:11:06.974714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974842] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xeb8e60 was disconnected and freed. reset controller. 
00:22:49.312 [2024-07-24 23:11:06.974909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.974983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.974990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.975000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.975007] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.312 [2024-07-24 23:11:06.975016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.312 [2024-07-24 23:11:06.975024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 
[2024-07-24 23:11:06.975308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975407] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 
[2024-07-24 23:11:06.975699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.313 [2024-07-24 23:11:06.975707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.313 [2024-07-24 23:11:06.975717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.314 [2024-07-24 23:11:06.975724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.314 [2024-07-24 23:11:06.975734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.314 [2024-07-24 23:11:06.975741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.314 [2024-07-24 23:11:06.975756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.314 [2024-07-24 23:11:06.975765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.314 [2024-07-24 23:11:06.975775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.314 [2024-07-24 23:11:06.975782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.314 [2024-07-24 23:11:06.975796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.314 [2024-07-24 23:11:06.975803] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.314 [2024-07-24 23:11:06.975813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.314 [2024-07-24 23:11:06.975820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.314 [2024-07-24 23:11:06.975829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.314 [2024-07-24 23:11:06.975837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.314 [2024-07-24 23:11:06.975846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.314 [2024-07-24 23:11:06.975853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.314 [2024-07-24 23:11:06.975862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.314 [2024-07-24 23:11:06.975870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.314 [2024-07-24 23:11:06.975879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.314 [2024-07-24 23:11:06.975887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.314 [2024-07-24 23:11:06.975896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.314 [2024-07-24 23:11:06.975903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.314 [2024-07-24 23:11:06.975914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.314 [2024-07-24 23:11:06.975921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.314 [2024-07-24 23:11:06.975931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.314 [2024-07-24 23:11:06.975939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.314 [2024-07-24 23:11:06.975948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.314 [2024-07-24 23:11:06.975956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.314 [2024-07-24 23:11:06.975965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.314 [2024-07-24 23:11:06.975973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.314 [2024-07-24 23:11:06.975982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.314 [2024-07-24 23:11:06.975989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:49.314 [2024-07-24 23:11:06.975999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.314 [2024-07-24 23:11:06.976008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.314 [2024-07-24 23:11:06.976018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.314 [2024-07-24 23:11:06.976026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.314 [2024-07-24 23:11:06.976077] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1992170 was disconnected and freed. reset controller. 00:22:49.314 [2024-07-24 23:11:06.978846] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:49.314 [2024-07-24 23:11:06.978873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:49.314 [2024-07-24 23:11:06.978889] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:22:49.314 [2024-07-24 23:11:06.978905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10857c0 (9): Bad file descriptor 00:22:49.314 [2024-07-24 23:11:06.978920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x107f4b0 (9): Bad file descriptor 00:22:49.314 [2024-07-24 23:11:06.979204] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:49.314 [2024-07-24 23:11:06.979275] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:49.314 [2024-07-24 23:11:06.979324] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:49.314 [2024-07-24 23:11:06.979363] 
nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:49.314 [2024-07-24 23:11:06.979401] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:49.314 [2024-07-24 23:11:06.979435] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:49.314 [2024-07-24 23:11:06.979746] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:49.314 [2024-07-24 23:11:06.980406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.314 [2024-07-24 23:11:06.980426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107f4b0 with addr=10.0.0.2, port=4420 00:22:49.314 [2024-07-24 23:11:06.980435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x107f4b0 is same with the state(5) to be set 00:22:49.314 [2024-07-24 23:11:06.980785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.314 [2024-07-24 23:11:06.980797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10857c0 with addr=10.0.0.2, port=4420 00:22:49.314 [2024-07-24 23:11:06.980806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10857c0 is same with the state(5) to be set 00:22:49.314 [2024-07-24 23:11:06.980888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x107f4b0 (9): Bad file descriptor 00:22:49.314 [2024-07-24 23:11:06.980900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10857c0 (9): Bad file descriptor 00:22:49.314 [2024-07-24 23:11:06.980959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:49.314 [2024-07-24 23:11:06.980968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:49.314 [2024-07-24 
23:11:06.980976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:49.314 [2024-07-24 23:11:06.980990] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:49.314 [2024-07-24 23:11:06.980997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:49.314 [2024-07-24 23:11:06.981004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:49.314 [2024-07-24 23:11:06.981050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:49.314 [2024-07-24 23:11:06.981059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:49.314 [2024-07-24 23:11:06.981567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1076e50 (9): Bad file descriptor 00:22:49.314 [2024-07-24 23:11:06.981589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf09380 (9): Bad file descriptor 00:22:49.314 [2024-07-24 23:11:06.981609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10867e0 (9): Bad file descriptor 00:22:49.314 [2024-07-24 23:11:06.981627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c0610 (9): Bad file descriptor 00:22:49.314 [2024-07-24 23:11:06.981645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeedbe0 (9): Bad file descriptor 00:22:49.314 [2024-07-24 23:11:06.981662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee9670 (9): Bad file descriptor 00:22:49.314 [2024-07-24 23:11:06.981678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xebd5d0 (9): Bad file descriptor 00:22:49.314 [2024-07-24 23:11:06.981693] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee1950 (9): Bad file descriptor 00:22:49.314 [2024-07-24 23:11:06.989363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:22:49.314 [2024-07-24 23:11:06.989379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:49.314 [2024-07-24 23:11:06.989967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.314 [2024-07-24 23:11:06.990007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10857c0 with addr=10.0.0.2, port=4420 00:22:49.314 [2024-07-24 23:11:06.990018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10857c0 is same with the state(5) to be set 00:22:49.314 [2024-07-24 23:11:06.990248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.314 [2024-07-24 23:11:06.990261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107f4b0 with addr=10.0.0.2, port=4420 00:22:49.314 [2024-07-24 23:11:06.990268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x107f4b0 is same with the state(5) to be set 00:22:49.314 [2024-07-24 23:11:06.990322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10857c0 (9): Bad file descriptor 00:22:49.314 [2024-07-24 23:11:06.990332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x107f4b0 (9): Bad file descriptor 00:22:49.314 [2024-07-24 23:11:06.990372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:49.314 [2024-07-24 23:11:06.990380] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:49.314 [2024-07-24 23:11:06.990389] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:49.314 [2024-07-24 23:11:06.990404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:49.315 [2024-07-24 23:11:06.990411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:49.315 [2024-07-24 23:11:06.990418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:49.315 [2024-07-24 23:11:06.990462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:49.315 [2024-07-24 23:11:06.990471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:49.315 [2024-07-24 23:11:06.991760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.991775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 23:11:06.991797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.991806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 23:11:06.991816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.991823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 23:11:06.991833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.991840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 23:11:06.991849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.991857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 23:11:06.991866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.991874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 23:11:06.991884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.991892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 23:11:06.991901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.991908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 23:11:06.991918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.991926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 
23:11:06.991935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.991942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 23:11:06.991952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.991959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 23:11:06.991969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.991976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 23:11:06.991985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.991993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 23:11:06.992002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.992011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 23:11:06.992022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.992030] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 23:11:06.992040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.992047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 23:11:06.992057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.992064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 23:11:06.992074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.992082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 23:11:06.992091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.992099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 23:11:06.992109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.992117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 23:11:06.992127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 
nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.992134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 23:11:06.992144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.992151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 23:11:06.992161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.992170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 23:11:06.992180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.992187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 23:11:06.992196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.992204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 23:11:06.992214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.992221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:49.315 [2024-07-24 23:11:06.992232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.992240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 23:11:06.992249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.992258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 23:11:06.992268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.992276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 23:11:06.992285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.992293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 23:11:06.992303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.992310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 23:11:06.992321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.992328] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 23:11:06.992337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.992345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 23:11:06.992355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.992363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 23:11:06.992372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.992379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 23:11:06.992389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.992397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.315 [2024-07-24 23:11:06.992407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.315 [2024-07-24 23:11:06.992414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.316 [2024-07-24 23:11:06.992423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.316 [2024-07-24 23:11:06.992431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.316 [2024-07-24 23:11:06.992440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.316 [2024-07-24 23:11:06.992449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.316 [2024-07-24 23:11:06.992459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.316 [2024-07-24 23:11:06.992467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.316 [2024-07-24 23:11:06.992476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.316 [2024-07-24 23:11:06.992483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.316 [2024-07-24 23:11:06.992493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.316 [2024-07-24 23:11:06.992500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.316 [2024-07-24 23:11:06.992510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.316 [2024-07-24 23:11:06.992517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:49.316 [2024-07-24 23:11:06.992527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.316 [2024-07-24 23:11:06.992534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.316 [2024-07-24 23:11:06.992544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.316 [2024-07-24 23:11:06.992551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.316 [2024-07-24 23:11:06.992561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.316 [2024-07-24 23:11:06.992568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.316 [2024-07-24 23:11:06.992578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.316 [2024-07-24 23:11:06.992585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.316 [2024-07-24 23:11:06.992594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.316 [2024-07-24 23:11:06.992601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.316 [2024-07-24 23:11:06.992611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.316 [2024-07-24 
23:11:06.992617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.316 [2024-07-24 23:11:06.992626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.316 [2024-07-24 23:11:06.992634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.316 [2024-07-24 23:11:06.992643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.316 [2024-07-24 23:11:06.992650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.316 [2024-07-24 23:11:06.992662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.316 [2024-07-24 23:11:06.992669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.316 [2024-07-24 23:11:06.992679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.316 [2024-07-24 23:11:06.992687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.316 [2024-07-24 23:11:06.992696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.316 [2024-07-24 23:11:06.992703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.316 [2024-07-24 23:11:06.992713] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.316 [2024-07-24 23:11:06.992720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.316 [2024-07-24 23:11:06.992729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.316 [2024-07-24 23:11:06.992737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.316 [2024-07-24 23:11:06.992746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.316 [2024-07-24 23:11:06.992758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.316 [2024-07-24 23:11:06.992767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.316 [2024-07-24 23:11:06.992774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.316 [2024-07-24 23:11:06.992784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.316 [2024-07-24 23:11:06.992791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.316 [2024-07-24 23:11:06.992800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.316 [2024-07-24 23:11:06.992807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.316 [2024-07-24 23:11:06.992817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.316 [2024-07-24 23:11:06.992824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.316 [2024-07-24 23:11:06.992833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.316 [2024-07-24 23:11:06.992841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.316 [2024-07-24 23:11:06.992850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.316 [2024-07-24 23:11:06.992857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.316 [2024-07-24 23:11:06.992867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.316 [2024-07-24 23:11:06.992875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.316 [2024-07-24 23:11:06.992884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc08e0 is same with the state(5) to be set 00:22:49.316 [2024-07-24 23:11:06.994180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.316 [2024-07-24 23:11:06.994196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.316 [2024-07-24 
23:11:06.994209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.316 [2024-07-24 23:11:06.994218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.316 [2024-07-24 23:11:06.994229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.316 [2024-07-24 23:11:06.994237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.316 [2024-07-24 23:11:06.994248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.316 [2024-07-24 23:11:06.994258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.316 [2024-07-24 23:11:06.994268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.316 [2024-07-24 23:11:06.994277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994318] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:49.317 [2024-07-24 23:11:06.994522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994614] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 
23:11:06.994908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.317 [2024-07-24 23:11:06.994958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.317 [2024-07-24 23:11:06.994967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.318 [2024-07-24 23:11:06.994974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.318 [2024-07-24 23:11:06.994984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.318 [2024-07-24 23:11:06.994992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.318 [2024-07-24 23:11:06.995002] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.318 [2024-07-24 23:11:06.995010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.318 [2024-07-24 23:11:06.995019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.318 [2024-07-24 23:11:06.995027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.318 [2024-07-24 23:11:06.995036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.318 [2024-07-24 23:11:06.995044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.318 [2024-07-24 23:11:06.995053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.318 [2024-07-24 23:11:06.995062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.318 [2024-07-24 23:11:06.995072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.318 [2024-07-24 23:11:06.995079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.318 [2024-07-24 23:11:06.995089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.318 [2024-07-24 23:11:06.995096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.318 [2024-07-24 23:11:06.995106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.318 [2024-07-24 23:11:06.995113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION (00/08)" entry pairs repeat for cid:52-63 (lba:23040-24448, len:128 each) on this qpair ...]
00:22:49.318 [2024-07-24 23:11:06.995310] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103f390 is same with the state(5) to be set
00:22:49.318 [2024-07-24 23:11:06.996578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.318 [2024-07-24 23:11:06.996591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION (00/08)" entry pairs repeat for cid:0-63 (lba:16384-24448, len:128 each) ...]
00:22:49.320 [2024-07-24 23:11:06.997692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1040780 is same with the state(5) to be set
00:22:49.320 [2024-07-24 23:11:06.998978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.320 [2024-07-24 23:11:06.998992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION (00/08)" entry pairs continue for cid:1-35 (lba:16512-20864, len:128 each); log truncated mid-entry ...]
p:0 m:0 dnr:0 00:22:49.321 [2024-07-24 23:11:06.999625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.321 [2024-07-24 23:11:06.999632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.321 [2024-07-24 23:11:06.999642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.321 [2024-07-24 23:11:06.999649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.321 [2024-07-24 23:11:06.999658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.321 [2024-07-24 23:11:06.999665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.321 [2024-07-24 23:11:06.999675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.321 [2024-07-24 23:11:06.999683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.321 [2024-07-24 23:11:06.999692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.321 [2024-07-24 23:11:06.999699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.321 [2024-07-24 23:11:06.999708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.321 [2024-07-24 
23:11:06.999716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.321 [2024-07-24 23:11:06.999725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.321 [2024-07-24 23:11:06.999732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.321 [2024-07-24 23:11:06.999741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.321 [2024-07-24 23:11:06.999749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.321 [2024-07-24 23:11:06.999763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.321 [2024-07-24 23:11:06.999771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.321 [2024-07-24 23:11:06.999780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.321 [2024-07-24 23:11:06.999789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.321 [2024-07-24 23:11:06.999799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.321 [2024-07-24 23:11:06.999806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.321 [2024-07-24 23:11:06.999815] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.321 [2024-07-24 23:11:06.999827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.321 [2024-07-24 23:11:06.999836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.321 [2024-07-24 23:11:06.999844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.321 [2024-07-24 23:11:06.999853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.321 [2024-07-24 23:11:06.999860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.321 [2024-07-24 23:11:06.999869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.321 [2024-07-24 23:11:06.999877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.321 [2024-07-24 23:11:06.999887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.321 [2024-07-24 23:11:06.999894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.321 [2024-07-24 23:11:06.999903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.321 [2024-07-24 23:11:06.999910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.321 [2024-07-24 23:11:06.999919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.321 [2024-07-24 23:11:06.999926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.321 [2024-07-24 23:11:06.999937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.321 [2024-07-24 23:11:06.999944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.321 [2024-07-24 23:11:06.999954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.321 [2024-07-24 23:11:06.999961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.321 [2024-07-24 23:11:06.999971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.321 [2024-07-24 23:11:06.999978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.321 [2024-07-24 23:11:06.999989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.321 [2024-07-24 23:11:06.999997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.321 [2024-07-24 23:11:07.000008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.321 
[2024-07-24 23:11:07.000017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.321 [2024-07-24 23:11:07.000026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.321 [2024-07-24 23:11:07.000034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.321 [2024-07-24 23:11:07.000043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.321 [2024-07-24 23:11:07.000050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.321 [2024-07-24 23:11:07.000060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.321 [2024-07-24 23:11:07.000067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.321 [2024-07-24 23:11:07.000076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.321 [2024-07-24 23:11:07.000083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.321 [2024-07-24 23:11:07.000093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.321 [2024-07-24 23:11:07.000101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.321 [2024-07-24 23:11:07.000109] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1041c00 is same with the state(5) to be set 00:22:49.321 [2024-07-24 23:11:07.001369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.321 [2024-07-24 23:11:07.001383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.321 [2024-07-24 23:11:07.001397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.001406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.001418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.001427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.001439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.001448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.001459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.001467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.001477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.001485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.001497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.001506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.001515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.001522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.001532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.001540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.001550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.001557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.001568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.001576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:49.322 [2024-07-24 23:11:07.001585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.001594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.001604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.001612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.001622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.001630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.001640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.001648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.001659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.001667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.001678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.001686] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.001695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.001704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.001715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.001724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.001734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.001743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.001757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.001765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.001774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.001781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.001791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.001799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.001809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.001815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.001825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.001833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.001843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.001851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.001862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.001870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.001880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.001887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.001896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.001904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.001913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.001920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.001930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.001937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.001947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.001957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.001968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.001977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.001987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 
23:11:07.001995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.002004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.002012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.002022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.002030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.002040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.002048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.002057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.002065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.002075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.002082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.322 [2024-07-24 23:11:07.002092] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.322 [2024-07-24 23:11:07.002100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.002109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.002116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.002126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.002134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.002143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.002151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.002161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.002169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.008966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.009005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.009016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.009024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.009033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.009041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.009051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.009059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.009068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.009076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.009086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.009093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.009103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 
[2024-07-24 23:11:07.009110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.009120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.009128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.009139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.009146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.009156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.009164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.009174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.009181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.009191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.009198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.009208] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.009221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.009231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.009239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.009248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.009256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.009266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.009273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.009283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.009290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.009300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.009307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.009317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.009324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.009334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.009342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.009350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7970 is same with the state(5) to be set 00:22:49.323 [2024-07-24 23:11:07.010681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.010699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.010716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.010725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.010738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.010747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:49.323 [2024-07-24 23:11:07.010764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.010772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.010783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.010797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.010808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.010816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.010826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.010834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.010843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.010850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.010861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.010868] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.010877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.010885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.010894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.010902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.010912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.010920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.010929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.010937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.010947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.323 [2024-07-24 23:11:07.010954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.323 [2024-07-24 23:11:07.010965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.010972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.010981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.010990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.010999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:49.324 [2024-07-24 23:11:07.011075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011170] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 
23:11:07.011466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011560] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.324 [2024-07-24 23:11:07.011648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.324 [2024-07-24 23:11:07.011655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.325 [2024-07-24 23:11:07.011664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.325 [2024-07-24 23:11:07.011673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.325 [2024-07-24 23:11:07.011683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.325 [2024-07-24 23:11:07.011691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.325 [2024-07-24 23:11:07.011700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.325 [2024-07-24 23:11:07.011708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.325 [2024-07-24 23:11:07.011717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.325 [2024-07-24 23:11:07.011725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.325 [2024-07-24 23:11:07.011734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.325 [2024-07-24 23:11:07.011742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.325 [2024-07-24 23:11:07.011755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.325 
[2024-07-24 23:11:07.011763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.325 [2024-07-24 23:11:07.011772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.325 [2024-07-24 23:11:07.011780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.325 [2024-07-24 23:11:07.011789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.325 [2024-07-24 23:11:07.011796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.325 [2024-07-24 23:11:07.011805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.325 [2024-07-24 23:11:07.011813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.325 [2024-07-24 23:11:07.011822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.325 [2024-07-24 23:11:07.011829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.325 [2024-07-24 23:11:07.011838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ea630 is same with the state(5) to be set 00:22:49.325 [2024-07-24 23:11:07.013112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.325 [2024-07-24 23:11:07.013126] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.325 [2024-07-24 23:11:07.013140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.325 [2024-07-24 23:11:07.013149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.325 [2024-07-24 23:11:07.013160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.325 [2024-07-24 23:11:07.013172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.325 [2024-07-24 23:11:07.013183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.325 [2024-07-24 23:11:07.013192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.325 [2024-07-24 23:11:07.013202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.325 [2024-07-24 23:11:07.013209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.325 [2024-07-24 23:11:07.013219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.325 [2024-07-24 23:11:07.013227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.325 [2024-07-24 23:11:07.013236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.325 [2024-07-24 23:11:07.013245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.325 [2024-07-24 23:11:07.013255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.325 [2024-07-24 23:11:07.013263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.325 [2024-07-24 23:11:07.013273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.325 [2024-07-24 23:11:07.013281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.325 [2024-07-24 23:11:07.013290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.325 [2024-07-24 23:11:07.013298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.325 [2024-07-24 23:11:07.013308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.325 [2024-07-24 23:11:07.013316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.325 [2024-07-24 23:11:07.013326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.325 [2024-07-24 23:11:07.013334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.325 
[2024-07-24 23:11:07.013343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.325 [2024-07-24 23:11:07.013351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.325 [2024-07-24 23:11:07.013360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.325 [2024-07-24 23:11:07.013368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.325 [2024-07-24 23:11:07.013378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.325 [2024-07-24 23:11:07.013385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.325 [2024-07-24 23:11:07.013396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.325 [2024-07-24 23:11:07.013404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.325 [2024-07-24 23:11:07.013414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.325 [2024-07-24 23:11:07.013421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.325 [2024-07-24 23:11:07.013431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.325 [2024-07-24 23:11:07.013438] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.325 [2024-07-24 23:11:07.013448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.325 [2024-07-24 23:11:07.013455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.325 [2024-07-24 23:11:07.013465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.325 [2024-07-24 23:11:07.013473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.325 [2024-07-24 23:11:07.013483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.325 [2024-07-24 23:11:07.013490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.325 [2024-07-24 23:11:07.013500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.325 [2024-07-24 23:11:07.013507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.325 [2024-07-24 23:11:07.013517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.013525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.013534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 
nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.013541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.013550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.013558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.013568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.013575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.013584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.013592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.013602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.013612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.013622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.013629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:49.326 [2024-07-24 23:11:07.013639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.013646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.013655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.013664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.013674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.013681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.013691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.013698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.013708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.013716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.013725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.013732] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.013742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.013749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.013764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.013772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.013781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.013788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.013798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.013806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.013816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.013823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.013834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.013841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.013851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.013858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.013867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.013875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.013884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.013891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.013901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.013909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.013918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.013925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.013935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.013941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.013951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.013958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.013967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.013975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.013984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.013991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.014001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.014008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.014018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 
23:11:07.014027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.014037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.014045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.014055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.014063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.014072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.014080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.014090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.014097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.014107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.014115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.014125] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.014132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.014142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.014149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.014159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.014167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.014176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.326 [2024-07-24 23:11:07.014184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-07-24 23:11:07.014193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.014200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.014210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.014217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.014226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.014234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.014243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x105b760 is same with the state(5) to be set 00:22:49.327 [2024-07-24 23:11:07.015743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.015767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.015785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.015793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.015803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.015811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.015821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.015829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.015839] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.015846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.015856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.015864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.015873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.015881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.015891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.015898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.015908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.015916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.015925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.015933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.015942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.015950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.015960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.015967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.015977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.015984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.015994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.016004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.016013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.016021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.016031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.016037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.016047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.016054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.016064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.016071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.016080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.016088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.016098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.016106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.016116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.016123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.016133] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.016141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.016150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.016158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.016168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.016175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.016185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.016193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.016202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.016210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.016221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.016228] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.016238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.016245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.016255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.016263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.016273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.016280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.016289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.016297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.016307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.016314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.016324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.016331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.016340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.016348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.016358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.016364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.016373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-24 23:11:07.016381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.327 [2024-07-24 23:11:07.016391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-24 23:11:07.016398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.328 [2024-07-24 23:11:07.016408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-24 23:11:07.016415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.328 [2024-07-24 
23:11:07.016424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-24 23:11:07.016433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.328 [2024-07-24 23:11:07.016444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-24 23:11:07.016451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.328 [2024-07-24 23:11:07.016461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-24 23:11:07.016468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.328 [2024-07-24 23:11:07.016477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-24 23:11:07.016485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.328 [2024-07-24 23:11:07.016495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-24 23:11:07.016502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.328 [2024-07-24 23:11:07.016512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-24 23:11:07.016519] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.328 [2024-07-24 23:11:07.016529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-24 23:11:07.016537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.328 [2024-07-24 23:11:07.016547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-24 23:11:07.016554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.328 [2024-07-24 23:11:07.016564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-24 23:11:07.016571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.328 [2024-07-24 23:11:07.016581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-24 23:11:07.016589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.328 [2024-07-24 23:11:07.016599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-24 23:11:07.016606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.328 [2024-07-24 23:11:07.016615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 
nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-24 23:11:07.016622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.328 [2024-07-24 23:11:07.016631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-24 23:11:07.016639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.328 [2024-07-24 23:11:07.016651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-24 23:11:07.016658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.328 [2024-07-24 23:11:07.016667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-24 23:11:07.016675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.328 [2024-07-24 23:11:07.016684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-24 23:11:07.016692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.328 [2024-07-24 23:11:07.016701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-24 23:11:07.016708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:49.328 [2024-07-24 23:11:07.016718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-24 23:11:07.016726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.328 [2024-07-24 23:11:07.016736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-24 23:11:07.016743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.328 [2024-07-24 23:11:07.016756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-24 23:11:07.016764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.328 [2024-07-24 23:11:07.016773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-24 23:11:07.016780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.328 [2024-07-24 23:11:07.016790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-24 23:11:07.016797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.328 [2024-07-24 23:11:07.016807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-24 23:11:07.016815] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.328 [2024-07-24 23:11:07.016824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-24 23:11:07.016831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.328 [2024-07-24 23:11:07.016841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-24 23:11:07.016848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.328 [2024-07-24 23:11:07.016858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-24 23:11:07.016867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.328 [2024-07-24 23:11:07.016876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x105cc40 is same with the state(5) to be set 00:22:49.328 [2024-07-24 23:11:07.018349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:49.328 [2024-07-24 23:11:07.018373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:49.328 [2024-07-24 23:11:07.018384] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:49.328 [2024-07-24 23:11:07.018394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:49.328 [2024-07-24 23:11:07.018472] 
bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:49.328 [2024-07-24 23:11:07.018487] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:49.328 [2024-07-24 23:11:07.018500] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:49.328 [2024-07-24 23:11:07.018510] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:49.328 [2024-07-24 23:11:07.018620] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:49.328 [2024-07-24 23:11:07.018633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:22:49.328 [2024-07-24 23:11:07.018642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:22:49.328 task offset: 24192 on job bdev=Nvme6n1 fails 00:22:49.328 00:22:49.328 Latency(us) 00:22:49.328 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:49.328 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:49.328 Job: Nvme1n1 ended in about 0.75 seconds with error 00:22:49.328 Verification LBA range: start 0x0 length 0x400 00:22:49.328 Nvme1n1 : 0.75 170.15 10.63 85.08 0.00 247255.32 21080.75 246415.36 00:22:49.328 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:49.328 Job: Nvme2n1 ended in about 0.75 seconds with error 00:22:49.328 Verification LBA range: start 0x0 length 0x400 00:22:49.328 Nvme2n1 : 0.75 169.61 10.60 84.81 0.00 241537.71 46530.56 213210.45 00:22:49.328 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:49.328 Job: Nvme3n1 ended in about 0.76 seconds with error 00:22:49.328 Verification LBA range: start 0x0 length 0x400 00:22:49.328 Nvme3n1 : 0.76 
169.08 10.57 84.54 0.00 235931.31 19333.12 244667.73 00:22:49.328 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:49.328 Job: Nvme4n1 ended in about 0.76 seconds with error 00:22:49.328 Verification LBA range: start 0x0 length 0x400 00:22:49.328 Nvme4n1 : 0.76 168.54 10.53 84.27 0.00 230297.60 40632.32 200103.25 00:22:49.328 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:49.328 Job: Nvme5n1 ended in about 0.77 seconds with error 00:22:49.328 Verification LBA range: start 0x0 length 0x400 00:22:49.328 Nvme5n1 : 0.77 166.52 10.41 83.26 0.00 226884.84 39321.60 232434.35 00:22:49.329 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:49.329 Job: Nvme6n1 ended in about 0.74 seconds with error 00:22:49.329 Verification LBA range: start 0x0 length 0x400 00:22:49.329 Nvme6n1 : 0.74 173.96 10.87 86.98 0.00 209418.74 3850.24 251658.24 00:22:49.329 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:49.329 Job: Nvme7n1 ended in about 0.77 seconds with error 00:22:49.329 Verification LBA range: start 0x0 length 0x400 00:22:49.329 Nvme7n1 : 0.77 165.98 10.37 82.99 0.00 214749.30 19988.48 249910.61 00:22:49.329 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:49.329 Job: Nvme8n1 ended in about 0.74 seconds with error 00:22:49.329 Verification LBA range: start 0x0 length 0x400 00:22:49.329 Nvme8n1 : 0.74 173.68 10.86 86.84 0.00 196908.30 4560.21 235929.60 00:22:49.329 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:49.329 Job: Nvme9n1 ended in about 0.77 seconds with error 00:22:49.329 Verification LBA range: start 0x0 length 0x400 00:22:49.329 Nvme9n1 : 0.77 82.74 5.17 82.74 0.00 304090.45 25449.81 307582.29 00:22:49.329 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:49.329 Job: Nvme10n1 ended in about 0.78 seconds with error 00:22:49.329 Verification LBA range: 
start 0x0 length 0x400 00:22:49.329 Nvme10n1 : 0.78 88.90 5.56 82.46 0.00 284684.52 21080.75 295348.91 00:22:49.329 =================================================================================================================== 00:22:49.329 Total : 1529.16 95.57 843.96 0.00 235369.71 3850.24 307582.29 00:22:49.329 [2024-07-24 23:11:07.043505] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:49.329 [2024-07-24 23:11:07.043540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:49.329 [2024-07-24 23:11:07.044069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.329 [2024-07-24 23:11:07.044089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebd5d0 with addr=10.0.0.2, port=4420 00:22:49.329 [2024-07-24 23:11:07.044099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebd5d0 is same with the state(5) to be set 00:22:49.329 [2024-07-24 23:11:07.044496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.329 [2024-07-24 23:11:07.044507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xee1950 with addr=10.0.0.2, port=4420 00:22:49.329 [2024-07-24 23:11:07.044515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee1950 is same with the state(5) to be set 00:22:49.329 [2024-07-24 23:11:07.044929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.329 [2024-07-24 23:11:07.044941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xee9670 with addr=10.0.0.2, port=4420 00:22:49.329 [2024-07-24 23:11:07.044948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee9670 is same with the state(5) to be set 00:22:49.329 [2024-07-24 23:11:07.045349] posix.c:1023:posix_sock_create: *ERROR*: connect() 
failed, errno = 111 00:22:49.329 [2024-07-24 23:11:07.045360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeedbe0 with addr=10.0.0.2, port=4420 00:22:49.329 [2024-07-24 23:11:07.045369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeedbe0 is same with the state(5) to be set 00:22:49.329 [2024-07-24 23:11:07.047943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.329 [2024-07-24 23:11:07.047959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10867e0 with addr=10.0.0.2, port=4420 00:22:49.329 [2024-07-24 23:11:07.047967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10867e0 is same with the state(5) to be set 00:22:49.329 [2024-07-24 23:11:07.048364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.329 [2024-07-24 23:11:07.048375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c0610 with addr=10.0.0.2, port=4420 00:22:49.329 [2024-07-24 23:11:07.048382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c0610 is same with the state(5) to be set 00:22:49.329 [2024-07-24 23:11:07.048803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.329 [2024-07-24 23:11:07.048814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1076e50 with addr=10.0.0.2, port=4420 00:22:49.329 [2024-07-24 23:11:07.048827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1076e50 is same with the state(5) to be set 00:22:49.329 [2024-07-24 23:11:07.049118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.329 [2024-07-24 23:11:07.049128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf09380 with addr=10.0.0.2, port=4420 00:22:49.329 
[2024-07-24 23:11:07.049135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf09380 is same with the state(5) to be set 00:22:49.329 [2024-07-24 23:11:07.049150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xebd5d0 (9): Bad file descriptor 00:22:49.329 [2024-07-24 23:11:07.049161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee1950 (9): Bad file descriptor 00:22:49.329 [2024-07-24 23:11:07.049171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee9670 (9): Bad file descriptor 00:22:49.329 [2024-07-24 23:11:07.049180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeedbe0 (9): Bad file descriptor 00:22:49.329 [2024-07-24 23:11:07.049204] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:49.329 [2024-07-24 23:11:07.049217] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:49.329 [2024-07-24 23:11:07.049232] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:49.329 [2024-07-24 23:11:07.049243] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:49.329 [2024-07-24 23:11:07.049254] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:49.329 [2024-07-24 23:11:07.049265] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:49.329 [2024-07-24 23:11:07.049558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:49.329 [2024-07-24 23:11:07.049572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:22:49.329 [2024-07-24 23:11:07.049612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10867e0 (9): Bad file descriptor 00:22:49.329 [2024-07-24 23:11:07.049624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c0610 (9): Bad file descriptor 00:22:49.329 [2024-07-24 23:11:07.049633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1076e50 (9): Bad file descriptor 00:22:49.329 [2024-07-24 23:11:07.049642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf09380 (9): Bad file descriptor 00:22:49.329 [2024-07-24 23:11:07.049651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:49.329 [2024-07-24 23:11:07.049659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:49.329 [2024-07-24 23:11:07.049667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:49.329 [2024-07-24 23:11:07.049679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:49.329 [2024-07-24 23:11:07.049685] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:49.329 [2024-07-24 23:11:07.049693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:22:49.329 [2024-07-24 23:11:07.049703] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:49.329 [2024-07-24 23:11:07.049710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:49.329 [2024-07-24 23:11:07.049716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:49.329 [2024-07-24 23:11:07.049726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:49.329 [2024-07-24 23:11:07.049736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:49.329 [2024-07-24 23:11:07.049744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:49.329 [2024-07-24 23:11:07.049833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:49.329 [2024-07-24 23:11:07.049843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:49.329 [2024-07-24 23:11:07.049849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:49.329 [2024-07-24 23:11:07.049857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:49.329 [2024-07-24 23:11:07.050230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.329 [2024-07-24 23:11:07.050243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107f4b0 with addr=10.0.0.2, port=4420 00:22:49.329 [2024-07-24 23:11:07.050251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x107f4b0 is same with the state(5) to be set 00:22:49.330 [2024-07-24 23:11:07.050636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.330 [2024-07-24 23:11:07.050646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10857c0 with addr=10.0.0.2, port=4420 00:22:49.330 [2024-07-24 23:11:07.050654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10857c0 is same with the state(5) to be set 00:22:49.330 [2024-07-24 23:11:07.050662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:49.330 [2024-07-24 23:11:07.050669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:49.330 [2024-07-24 23:11:07.050676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:49.330 [2024-07-24 23:11:07.050686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:49.330 [2024-07-24 23:11:07.050693] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:49.330 [2024-07-24 23:11:07.050700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:22:49.330 [2024-07-24 23:11:07.050709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:49.330 [2024-07-24 23:11:07.050716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:49.330 [2024-07-24 23:11:07.050722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:49.330 [2024-07-24 23:11:07.050732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:49.330 [2024-07-24 23:11:07.050738] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:49.330 [2024-07-24 23:11:07.050746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:49.330 [2024-07-24 23:11:07.050779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:49.330 [2024-07-24 23:11:07.050786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:49.330 [2024-07-24 23:11:07.050793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:49.330 [2024-07-24 23:11:07.050798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:49.330 [2024-07-24 23:11:07.050806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x107f4b0 (9): Bad file descriptor 00:22:49.330 [2024-07-24 23:11:07.050816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10857c0 (9): Bad file descriptor 00:22:49.330 [2024-07-24 23:11:07.050843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:49.330 [2024-07-24 23:11:07.050853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:49.330 [2024-07-24 23:11:07.050861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:49.330 [2024-07-24 23:11:07.050870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:49.330 [2024-07-24 23:11:07.050877] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:49.330 [2024-07-24 23:11:07.050883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:49.330 [2024-07-24 23:11:07.050911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:49.330 [2024-07-24 23:11:07.050919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:49.591 23:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:22:49.591 23:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:22:50.533 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 925961 00:22:50.533 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (925961) - No such process 00:22:50.533 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:22:50.533 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:22:50.533 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:50.533 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:50.533 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:50.533 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:50.533 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:50.533 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:22:50.533 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:50.533 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:22:50.533 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in 
{1..20} 00:22:50.533 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:50.533 rmmod nvme_tcp 00:22:50.533 rmmod nvme_fabrics 00:22:50.533 rmmod nvme_keyring 00:22:50.533 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:50.533 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:22:50.533 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:22:50.533 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:50.533 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:50.533 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:50.533 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:50.533 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:50.533 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:50.533 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.533 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.533 23:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.083 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:53.083 00:22:53.083 real 0m7.826s 00:22:53.083 
user 0m19.128s 00:22:53.083 sys 0m1.197s 00:22:53.083 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:53.083 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:53.083 ************************************ 00:22:53.083 END TEST nvmf_shutdown_tc3 00:22:53.083 ************************************ 00:22:53.083 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:22:53.083 00:22:53.083 real 0m33.560s 00:22:53.083 user 1m16.431s 00:22:53.083 sys 0m9.973s 00:22:53.083 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:53.083 23:11:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:53.083 ************************************ 00:22:53.083 END TEST nvmf_shutdown 00:22:53.083 ************************************ 00:22:53.083 23:11:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:22:53.083 00:22:53.083 real 11m47.192s 00:22:53.083 user 24m43.100s 00:22:53.083 sys 3m29.934s 00:22:53.083 23:11:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:53.083 23:11:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:53.083 ************************************ 00:22:53.083 END TEST nvmf_target_extra 00:22:53.083 ************************************ 00:22:53.083 23:11:10 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:53.083 23:11:10 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:53.083 23:11:10 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:53.083 23:11:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:53.083 
************************************ 00:22:53.083 START TEST nvmf_host 00:22:53.083 ************************************ 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:53.083 * Looking for test storage... 00:22:53.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 
00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.083 ************************************ 00:22:53.083 START TEST nvmf_multicontroller 00:22:53.083 ************************************ 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:53.083 * Looking for test storage... 
00:22:53.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:53.083 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:22:53.084 23:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@291 -- # pci_devs=() 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:01.241 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:01.241 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:01.241 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:01.242 Found net devices under 0000:31:00.0: cvl_0_0 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:01.242 Found net devices under 0000:31:00.1: cvl_0_1 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@414 -- # is_hw=yes 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:01.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:01.242 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.752 ms 00:23:01.242 00:23:01.242 --- 10.0.0.2 ping statistics --- 00:23:01.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.242 rtt min/avg/max/mdev = 0.752/0.752/0.752/0.000 ms 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:01.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:01.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.451 ms 00:23:01.242 00:23:01.242 --- 10.0.0.1 ping statistics --- 00:23:01.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.242 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=931397 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 931397 00:23:01.242 23:11:18 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 931397 ']' 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:01.242 23:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.503 [2024-07-24 23:11:19.034681] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:23:01.503 [2024-07-24 23:11:19.034759] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:01.503 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.504 [2024-07-24 23:11:19.131897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:01.504 [2024-07-24 23:11:19.228506] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:01.504 [2024-07-24 23:11:19.228557] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:01.504 [2024-07-24 23:11:19.228565] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:01.504 [2024-07-24 23:11:19.228572] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:01.504 [2024-07-24 23:11:19.228578] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:01.504 [2024-07-24 23:11:19.228743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.504 [2024-07-24 23:11:19.228922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:01.504 [2024-07-24 23:11:19.229033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:02.075 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:02.075 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:23:02.075 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:02.075 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:02.075 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.075 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:02.075 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:02.075 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.075 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.075 [2024-07-24 23:11:19.847758] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.337 Malloc0 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.337 [2024-07-24 
23:11:19.920925] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.337 [2024-07-24 23:11:19.932845] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.337 Malloc1 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.337 23:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.337 23:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.337 23:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=931749 00:23:02.337 23:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:02.337 23:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
write -t 1 -f 00:23:02.337 23:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 931749 /var/tmp/bdevperf.sock 00:23:02.337 23:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 931749 ']' 00:23:02.337 23:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:02.337 23:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:02.337 23:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:02.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:02.337 23:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:02.337 23:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:03.280 23:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:03.280 23:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:23:03.280 23:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:03.280 23:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.280 23:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:03.280 NVMe0n1 00:23:03.280 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.280 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:03.280 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:03.280 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.280 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:03.281 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.281 1 00:23:03.281 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:03.281 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:03.281 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:03.281 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:03.281 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:03.281 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:03.281 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:03.281 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 
00:23:03.281 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.281 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:03.281 request: 00:23:03.281 { 00:23:03.281 "name": "NVMe0", 00:23:03.281 "trtype": "tcp", 00:23:03.281 "traddr": "10.0.0.2", 00:23:03.281 "adrfam": "ipv4", 00:23:03.281 "trsvcid": "4420", 00:23:03.281 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:03.281 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:03.281 "hostaddr": "10.0.0.2", 00:23:03.281 "hostsvcid": "60000", 00:23:03.281 "prchk_reftag": false, 00:23:03.281 "prchk_guard": false, 00:23:03.281 "hdgst": false, 00:23:03.281 "ddgst": false, 00:23:03.281 "method": "bdev_nvme_attach_controller", 00:23:03.281 "req_id": 1 00:23:03.281 } 00:23:03.281 Got JSON-RPC error response 00:23:03.281 response: 00:23:03.281 { 00:23:03.281 "code": -114, 00:23:03.281 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:03.281 } 00:23:03.281 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:03.281 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:03.281 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:03.281 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:03.281 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:03.281 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:03.281 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:03.281 23:11:21 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:03.281 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:03.281 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:03.281 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:03.281 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:03.281 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:03.281 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.281 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:03.542 request: 00:23:03.542 { 00:23:03.542 "name": "NVMe0", 00:23:03.542 "trtype": "tcp", 00:23:03.542 "traddr": "10.0.0.2", 00:23:03.542 "adrfam": "ipv4", 00:23:03.542 "trsvcid": "4420", 00:23:03.542 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:03.542 "hostaddr": "10.0.0.2", 00:23:03.542 "hostsvcid": "60000", 00:23:03.542 "prchk_reftag": false, 00:23:03.542 "prchk_guard": false, 00:23:03.542 "hdgst": false, 00:23:03.542 "ddgst": false, 00:23:03.542 "method": "bdev_nvme_attach_controller", 00:23:03.542 "req_id": 1 00:23:03.542 } 00:23:03.542 Got JSON-RPC error response 00:23:03.542 response: 00:23:03.542 { 00:23:03.542 "code": -114, 00:23:03.542 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:03.542 } 00:23:03.542 
23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:03.542 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:03.542 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:03.542 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:03.542 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:03.542 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:03.542 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:03.542 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:03.542 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:03.542 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:03.542 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:03.542 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:03.542 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:03.542 23:11:21 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.542 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:03.542 request: 00:23:03.542 { 00:23:03.542 "name": "NVMe0", 00:23:03.542 "trtype": "tcp", 00:23:03.542 "traddr": "10.0.0.2", 00:23:03.542 "adrfam": "ipv4", 00:23:03.542 "trsvcid": "4420", 00:23:03.542 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:03.542 "hostaddr": "10.0.0.2", 00:23:03.542 "hostsvcid": "60000", 00:23:03.542 "prchk_reftag": false, 00:23:03.542 "prchk_guard": false, 00:23:03.542 "hdgst": false, 00:23:03.542 "ddgst": false, 00:23:03.542 "multipath": "disable", 00:23:03.542 "method": "bdev_nvme_attach_controller", 00:23:03.542 "req_id": 1 00:23:03.542 } 00:23:03.542 Got JSON-RPC error response 00:23:03.542 response: 00:23:03.542 { 00:23:03.542 "code": -114, 00:23:03.542 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:03.542 } 00:23:03.542 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:03.542 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:03.542 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:03.542 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:03.542 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:03.542 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:03.542 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:03.542 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:03.542 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:03.542 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:03.542 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:03.542 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:03.542 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:03.542 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.542 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:03.542 request: 00:23:03.542 { 00:23:03.542 "name": "NVMe0", 00:23:03.542 "trtype": "tcp", 00:23:03.542 "traddr": "10.0.0.2", 00:23:03.542 "adrfam": "ipv4", 00:23:03.542 "trsvcid": "4420", 00:23:03.542 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:03.542 "hostaddr": "10.0.0.2", 00:23:03.542 "hostsvcid": "60000", 00:23:03.542 "prchk_reftag": false, 00:23:03.542 "prchk_guard": false, 00:23:03.542 "hdgst": false, 00:23:03.542 "ddgst": false, 00:23:03.542 "multipath": "failover", 00:23:03.542 "method": "bdev_nvme_attach_controller", 00:23:03.542 "req_id": 1 00:23:03.542 } 00:23:03.542 Got JSON-RPC error response 00:23:03.542 response: 00:23:03.542 { 00:23:03.542 "code": -114, 00:23:03.543 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:03.543 
} 00:23:03.543 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:03.543 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:03.543 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:03.543 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:03.543 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:03.543 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:03.543 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.543 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:03.543 00:23:03.543 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.543 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:03.543 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.543 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:03.543 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.543 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:03.543 23:11:21 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.543 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:03.804 00:23:03.804 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.804 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:03.804 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:03.804 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.804 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:03.804 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.804 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:03.804 23:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:04.810 0 00:23:04.810 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:04.810 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.810 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:04.810 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.810 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 931749 00:23:04.810 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 
931749 ']' 00:23:04.810 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 931749 00:23:04.810 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:23:04.810 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:04.810 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 931749 00:23:04.810 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:04.810 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:04.810 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 931749' 00:23:04.810 killing process with pid 931749 00:23:04.810 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 931749 00:23:04.810 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 931749 00:23:05.071 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:05.071 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.071 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:05.071 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.071 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:05.071 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.071 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # 
set +x 00:23:05.071 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.071 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:05.071 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:05.071 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:05.071 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:05.071 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:23:05.071 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:23:05.071 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:05.071 [2024-07-24 23:11:20.052071] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:23:05.071 [2024-07-24 23:11:20.052131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid931749 ] 00:23:05.071 EAL: No free 2048 kB hugepages reported on node 1 00:23:05.071 [2024-07-24 23:11:20.118197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.071 [2024-07-24 23:11:20.182241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.071 [2024-07-24 23:11:21.351636] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 0f330faa-0e4f-4a5a-9c39-da6c560858d7 already exists 00:23:05.071 [2024-07-24 23:11:21.351668] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:0f330faa-0e4f-4a5a-9c39-da6c560858d7 alias for bdev NVMe1n1 00:23:05.071 [2024-07-24 23:11:21.351676] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:05.071 Running I/O for 1 seconds... 
00:23:05.071 00:23:05.071 Latency(us) 00:23:05.071 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.071 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:05.071 NVMe0n1 : 1.00 29532.99 115.36 0.00 0.00 4323.91 2143.57 11359.57 00:23:05.071 =================================================================================================================== 00:23:05.071 Total : 29532.99 115.36 0.00 0.00 4323.91 2143.57 11359.57 00:23:05.071 Received shutdown signal, test time was about 1.000000 seconds 00:23:05.071 00:23:05.071 Latency(us) 00:23:05.071 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.072 =================================================================================================================== 00:23:05.072 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:05.072 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:05.072 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:05.072 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:05.072 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:05.072 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:05.072 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:05.072 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:05.072 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:05.072 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:05.072 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:23:05.072 rmmod nvme_tcp 00:23:05.072 rmmod nvme_fabrics 00:23:05.072 rmmod nvme_keyring 00:23:05.072 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:05.072 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:05.072 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:05.072 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 931397 ']' 00:23:05.072 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 931397 00:23:05.072 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 931397 ']' 00:23:05.072 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 931397 00:23:05.072 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:23:05.072 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:05.072 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 931397 00:23:05.333 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:05.333 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:05.333 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 931397' 00:23:05.333 killing process with pid 931397 00:23:05.333 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 931397 00:23:05.333 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 931397 00:23:05.333 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:05.333 23:11:22 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:05.333 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:05.333 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:05.333 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:05.333 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.333 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.333 23:11:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:07.877 00:23:07.877 real 0m14.359s 00:23:07.877 user 0m16.544s 00:23:07.877 sys 0m6.803s 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.877 ************************************ 00:23:07.877 END TEST nvmf_multicontroller 00:23:07.877 ************************************ 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.877 ************************************ 00:23:07.877 START TEST nvmf_aer 00:23:07.877 ************************************ 00:23:07.877 23:11:25 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:07.877 * Looking for test storage... 00:23:07.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:07.877 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:07.878 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.878 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:07.878 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.878 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:07.878 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:07.878 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:23:07.878 23:11:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga 
x722 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 
00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:16.017 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:16.017 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:16.017 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:16.018 Found net devices under 0000:31:00.0: cvl_0_0 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:16.018 Found net devices under 0000:31:00.1: cvl_0_1 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:16.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:16.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.781 ms 00:23:16.018 00:23:16.018 --- 10.0.0.2 ping statistics --- 00:23:16.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.018 rtt min/avg/max/mdev = 0.781/0.781/0.781/0.000 ms 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:16.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:16.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:23:16.018 00:23:16.018 --- 10.0.0.1 ping statistics --- 00:23:16.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.018 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=936888 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 936888 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 936888 ']' 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:16.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:16.018 23:11:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:16.018 [2024-07-24 23:11:33.590745] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:23:16.018 [2024-07-24 23:11:33.590836] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:16.018 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.018 [2024-07-24 23:11:33.672717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:16.018 [2024-07-24 23:11:33.748735] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:16.018 [2024-07-24 23:11:33.748783] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:16.018 [2024-07-24 23:11:33.748792] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:16.018 [2024-07-24 23:11:33.748801] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:16.018 [2024-07-24 23:11:33.748807] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:16.018 [2024-07-24 23:11:33.748875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:16.018 [2024-07-24 23:11:33.748997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.018 [2024-07-24 23:11:33.749152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.018 [2024-07-24 23:11:33.749153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:16.590 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:16.590 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:23:16.590 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:16.590 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:16.590 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:16.851 [2024-07-24 23:11:34.422797] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.851 23:11:34 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:16.851 Malloc0 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:16.851 [2024-07-24 23:11:34.482053] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:16.851 [ 
00:23:16.851 { 00:23:16.851 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:16.851 "subtype": "Discovery", 00:23:16.851 "listen_addresses": [], 00:23:16.851 "allow_any_host": true, 00:23:16.851 "hosts": [] 00:23:16.851 }, 00:23:16.851 { 00:23:16.851 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.851 "subtype": "NVMe", 00:23:16.851 "listen_addresses": [ 00:23:16.851 { 00:23:16.851 "trtype": "TCP", 00:23:16.851 "adrfam": "IPv4", 00:23:16.851 "traddr": "10.0.0.2", 00:23:16.851 "trsvcid": "4420" 00:23:16.851 } 00:23:16.851 ], 00:23:16.851 "allow_any_host": true, 00:23:16.851 "hosts": [], 00:23:16.851 "serial_number": "SPDK00000000000001", 00:23:16.851 "model_number": "SPDK bdev Controller", 00:23:16.851 "max_namespaces": 2, 00:23:16.851 "min_cntlid": 1, 00:23:16.851 "max_cntlid": 65519, 00:23:16.851 "namespaces": [ 00:23:16.851 { 00:23:16.851 "nsid": 1, 00:23:16.851 "bdev_name": "Malloc0", 00:23:16.851 "name": "Malloc0", 00:23:16.851 "nguid": "A3D4B9E05B744E76A5C1573EE8D5A928", 00:23:16.851 "uuid": "a3d4b9e0-5b74-4e76-a5c1-573ee8d5a928" 00:23:16.851 } 00:23:16.851 ] 00:23:16.851 } 00:23:16.851 ] 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=937138 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:23:16.851 23:11:34 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:16.851 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:23:16.851 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:17.112 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:17.112 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:23:17.112 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:23:17.112 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:17.112 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:17.112 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:17.112 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:23:17.112 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:17.112 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.112 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:17.112 Malloc1 00:23:17.112 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.112 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:17.112 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.112 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:17.112 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.112 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:17.112 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.112 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:17.112 [ 00:23:17.112 { 00:23:17.112 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:17.112 "subtype": "Discovery", 00:23:17.112 "listen_addresses": [], 00:23:17.112 "allow_any_host": true, 00:23:17.112 "hosts": [] 00:23:17.112 }, 00:23:17.112 { 00:23:17.112 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.112 "subtype": "NVMe", 00:23:17.112 "listen_addresses": [ 00:23:17.112 { 00:23:17.112 "trtype": "TCP", 00:23:17.112 "adrfam": "IPv4", 00:23:17.112 "traddr": "10.0.0.2", 00:23:17.112 "trsvcid": "4420" 00:23:17.112 } 00:23:17.112 ], 00:23:17.112 "allow_any_host": true, 00:23:17.112 "hosts": [], 00:23:17.112 "serial_number": "SPDK00000000000001", 00:23:17.112 "model_number": 
"SPDK bdev Controller", 00:23:17.112 "max_namespaces": 2, 00:23:17.112 "min_cntlid": 1, 00:23:17.112 "max_cntlid": 65519, 00:23:17.112 "namespaces": [ 00:23:17.112 { 00:23:17.112 "nsid": 1, 00:23:17.112 "bdev_name": "Malloc0", 00:23:17.112 "name": "Malloc0", 00:23:17.112 "nguid": "A3D4B9E05B744E76A5C1573EE8D5A928", 00:23:17.112 "uuid": "a3d4b9e0-5b74-4e76-a5c1-573ee8d5a928" 00:23:17.112 }, 00:23:17.112 { 00:23:17.112 "nsid": 2, 00:23:17.112 "bdev_name": "Malloc1", 00:23:17.112 "name": "Malloc1", 00:23:17.112 "nguid": "F9EB897DD2C247EE96F6CD20A80186DB", 00:23:17.112 "uuid": "f9eb897d-d2c2-47ee-96f6-cd20a80186db" 00:23:17.112 Asynchronous Event Request test 00:23:17.112 Attaching to 10.0.0.2 00:23:17.112 Attached to 10.0.0.2 00:23:17.112 Registering asynchronous event callbacks... 00:23:17.112 Starting namespace attribute notice tests for all controllers... 00:23:17.112 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:17.112 aer_cb - Changed Namespace 00:23:17.112 Cleaning up... 
00:23:17.112 } 00:23:17.112 ] 00:23:17.112 } 00:23:17.112 ] 00:23:17.112 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.112 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 937138 00:23:17.112 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:17.112 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.112 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:17.373 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.373 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:17.373 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.373 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:17.373 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.373 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:17.373 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.373 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:17.373 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.373 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:17.373 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:17.373 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:17.373 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:17.373 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:17.373 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@120 -- # set +e 00:23:17.373 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:17.373 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:17.373 rmmod nvme_tcp 00:23:17.373 rmmod nvme_fabrics 00:23:17.373 rmmod nvme_keyring 00:23:17.373 23:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:17.373 23:11:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:23:17.373 23:11:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:17.373 23:11:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 936888 ']' 00:23:17.373 23:11:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 936888 00:23:17.373 23:11:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 936888 ']' 00:23:17.373 23:11:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 936888 00:23:17.373 23:11:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:23:17.373 23:11:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:17.373 23:11:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 936888 00:23:17.373 23:11:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:17.373 23:11:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:17.373 23:11:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 936888' 00:23:17.373 killing process with pid 936888 00:23:17.373 23:11:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 936888 00:23:17.373 23:11:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 936888 00:23:17.634 23:11:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == 
iso ']' 00:23:17.634 23:11:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:17.634 23:11:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:17.634 23:11:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:17.634 23:11:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:17.634 23:11:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.634 23:11:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:17.635 23:11:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.549 23:11:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:19.549 00:23:19.549 real 0m12.126s 00:23:19.549 user 0m8.164s 00:23:19.549 sys 0m6.529s 00:23:19.549 23:11:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:19.549 23:11:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:19.549 ************************************ 00:23:19.549 END TEST nvmf_aer 00:23:19.549 ************************************ 00:23:19.549 23:11:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:19.549 23:11:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:19.549 23:11:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:19.549 23:11:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.811 ************************************ 00:23:19.811 START TEST nvmf_async_init 00:23:19.811 ************************************ 00:23:19.811 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:19.811 * Looking for test storage... 00:23:19.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:19.811 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:19.811 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:19.811 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:19.811 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:19.811 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:19.811 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:19.811 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:19.812 23:11:37 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=66d026d6df3b4479aebba2ad34f8b104 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:23:19.812 23:11:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:27.956 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:27.956 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:23:27.956 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:23:27.956 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:27.956 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:27.956 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:27.956 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:27.956 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:23:27.956 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:27.956 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:23:27.956 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:23:27.956 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:23:27.956 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:23:27.956 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:23:27.956 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:23:27.956 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:27.956 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:27.956 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:27.956 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:27.956 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:27.956 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:27.956 
23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:27.956 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:27.956 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:27.956 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:27.956 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:27.956 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:27.956 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:27.956 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:27.956 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:27.956 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:27.956 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:27.956 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:27.956 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:27.956 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:27.956 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.957 23:11:45 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:27.957 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:27.957 Found net devices under 0000:31:00.0: cvl_0_0 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:27.957 Found net devices under 0000:31:00.1: cvl_0_1 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:27.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:27.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.579 ms 00:23:27.957 00:23:27.957 --- 10.0.0.2 ping statistics --- 00:23:27.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.957 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:27.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:27.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:23:27.957 00:23:27.957 --- 10.0.0.1 ping statistics --- 00:23:27.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.957 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:27.957 23:11:45 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=941817 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 941817 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 941817 ']' 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:27.957 23:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:27.957 [2024-07-24 23:11:45.513874] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:23:27.957 [2024-07-24 23:11:45.513930] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:27.957 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.957 [2024-07-24 23:11:45.586898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.957 [2024-07-24 23:11:45.653988] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:27.957 [2024-07-24 23:11:45.654025] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:27.957 [2024-07-24 23:11:45.654032] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.957 [2024-07-24 23:11:45.654039] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:27.957 [2024-07-24 23:11:45.654044] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:27.957 [2024-07-24 23:11:45.654062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.529 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:28.529 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:23:28.529 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:28.529 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:28.529 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:28.529 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.529 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:28.529 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.529 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:28.529 [2024-07-24 23:11:46.300361] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.529 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.529 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:28.529 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.529 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:28.529 null0 00:23:28.529 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.529 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:28.529 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.529 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:28.789 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.790 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:28.790 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.790 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:28.790 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.790 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 66d026d6df3b4479aebba2ad34f8b104 00:23:28.790 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.790 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:28.790 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.790 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:28.790 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.790 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:28.790 [2024-07-24 23:11:46.340574] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.790 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.790 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:28.790 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.790 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:28.790 nvme0n1 00:23:28.790 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.790 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:28.790 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.790 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:29.051 [ 00:23:29.051 { 00:23:29.051 "name": "nvme0n1", 00:23:29.051 "aliases": [ 00:23:29.051 "66d026d6-df3b-4479-aebb-a2ad34f8b104" 00:23:29.051 ], 00:23:29.051 "product_name": "NVMe disk", 00:23:29.051 "block_size": 512, 00:23:29.051 "num_blocks": 2097152, 00:23:29.051 "uuid": "66d026d6-df3b-4479-aebb-a2ad34f8b104", 00:23:29.051 "assigned_rate_limits": { 00:23:29.051 "rw_ios_per_sec": 0, 00:23:29.051 "rw_mbytes_per_sec": 0, 00:23:29.051 "r_mbytes_per_sec": 0, 00:23:29.051 "w_mbytes_per_sec": 0 00:23:29.051 }, 00:23:29.051 "claimed": false, 00:23:29.051 "zoned": false, 00:23:29.051 "supported_io_types": { 00:23:29.051 "read": true, 00:23:29.051 "write": true, 00:23:29.051 "unmap": false, 00:23:29.051 "flush": true, 00:23:29.051 "reset": true, 00:23:29.051 "nvme_admin": true, 00:23:29.051 "nvme_io": true, 00:23:29.051 "nvme_io_md": false, 00:23:29.051 "write_zeroes": true, 00:23:29.051 "zcopy": false, 00:23:29.051 "get_zone_info": false, 00:23:29.051 "zone_management": false, 00:23:29.051 "zone_append": false, 00:23:29.051 "compare": true, 00:23:29.051 "compare_and_write": true, 00:23:29.051 "abort": true, 00:23:29.051 "seek_hole": false, 00:23:29.051 "seek_data": false, 00:23:29.051 "copy": true, 00:23:29.051 "nvme_iov_md": false 
00:23:29.051 }, 00:23:29.051 "memory_domains": [ 00:23:29.051 { 00:23:29.051 "dma_device_id": "system", 00:23:29.051 "dma_device_type": 1 00:23:29.051 } 00:23:29.051 ], 00:23:29.051 "driver_specific": { 00:23:29.051 "nvme": [ 00:23:29.051 { 00:23:29.051 "trid": { 00:23:29.051 "trtype": "TCP", 00:23:29.051 "adrfam": "IPv4", 00:23:29.051 "traddr": "10.0.0.2", 00:23:29.051 "trsvcid": "4420", 00:23:29.051 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:29.051 }, 00:23:29.051 "ctrlr_data": { 00:23:29.051 "cntlid": 1, 00:23:29.051 "vendor_id": "0x8086", 00:23:29.051 "model_number": "SPDK bdev Controller", 00:23:29.051 "serial_number": "00000000000000000000", 00:23:29.051 "firmware_revision": "24.09", 00:23:29.051 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:29.051 "oacs": { 00:23:29.051 "security": 0, 00:23:29.051 "format": 0, 00:23:29.051 "firmware": 0, 00:23:29.051 "ns_manage": 0 00:23:29.051 }, 00:23:29.051 "multi_ctrlr": true, 00:23:29.051 "ana_reporting": false 00:23:29.051 }, 00:23:29.051 "vs": { 00:23:29.051 "nvme_version": "1.3" 00:23:29.051 }, 00:23:29.051 "ns_data": { 00:23:29.051 "id": 1, 00:23:29.051 "can_share": true 00:23:29.051 } 00:23:29.051 } 00:23:29.051 ], 00:23:29.051 "mp_policy": "active_passive" 00:23:29.051 } 00:23:29.051 } 00:23:29.051 ] 00:23:29.051 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.051 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:29.051 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.051 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:29.051 [2024-07-24 23:11:46.589406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:29.051 [2024-07-24 23:11:46.589467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23efb30 
(9): Bad file descriptor 00:23:29.051 [2024-07-24 23:11:46.721857] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:29.051 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.051 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:29.051 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.051 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:29.051 [ 00:23:29.051 { 00:23:29.051 "name": "nvme0n1", 00:23:29.051 "aliases": [ 00:23:29.051 "66d026d6-df3b-4479-aebb-a2ad34f8b104" 00:23:29.051 ], 00:23:29.051 "product_name": "NVMe disk", 00:23:29.051 "block_size": 512, 00:23:29.051 "num_blocks": 2097152, 00:23:29.051 "uuid": "66d026d6-df3b-4479-aebb-a2ad34f8b104", 00:23:29.051 "assigned_rate_limits": { 00:23:29.051 "rw_ios_per_sec": 0, 00:23:29.051 "rw_mbytes_per_sec": 0, 00:23:29.051 "r_mbytes_per_sec": 0, 00:23:29.051 "w_mbytes_per_sec": 0 00:23:29.051 }, 00:23:29.051 "claimed": false, 00:23:29.051 "zoned": false, 00:23:29.051 "supported_io_types": { 00:23:29.051 "read": true, 00:23:29.051 "write": true, 00:23:29.051 "unmap": false, 00:23:29.051 "flush": true, 00:23:29.051 "reset": true, 00:23:29.051 "nvme_admin": true, 00:23:29.051 "nvme_io": true, 00:23:29.051 "nvme_io_md": false, 00:23:29.051 "write_zeroes": true, 00:23:29.051 "zcopy": false, 00:23:29.051 "get_zone_info": false, 00:23:29.051 "zone_management": false, 00:23:29.051 "zone_append": false, 00:23:29.051 "compare": true, 00:23:29.051 "compare_and_write": true, 00:23:29.051 "abort": true, 00:23:29.051 "seek_hole": false, 00:23:29.051 "seek_data": false, 00:23:29.051 "copy": true, 00:23:29.051 "nvme_iov_md": false 00:23:29.051 }, 00:23:29.051 "memory_domains": [ 00:23:29.051 { 00:23:29.051 "dma_device_id": "system", 00:23:29.051 "dma_device_type": 1 
00:23:29.051 } 00:23:29.051 ], 00:23:29.051 "driver_specific": { 00:23:29.051 "nvme": [ 00:23:29.051 { 00:23:29.051 "trid": { 00:23:29.051 "trtype": "TCP", 00:23:29.051 "adrfam": "IPv4", 00:23:29.051 "traddr": "10.0.0.2", 00:23:29.051 "trsvcid": "4420", 00:23:29.051 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:29.051 }, 00:23:29.051 "ctrlr_data": { 00:23:29.051 "cntlid": 2, 00:23:29.051 "vendor_id": "0x8086", 00:23:29.051 "model_number": "SPDK bdev Controller", 00:23:29.051 "serial_number": "00000000000000000000", 00:23:29.051 "firmware_revision": "24.09", 00:23:29.051 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:29.051 "oacs": { 00:23:29.051 "security": 0, 00:23:29.051 "format": 0, 00:23:29.051 "firmware": 0, 00:23:29.051 "ns_manage": 0 00:23:29.051 }, 00:23:29.051 "multi_ctrlr": true, 00:23:29.051 "ana_reporting": false 00:23:29.051 }, 00:23:29.051 "vs": { 00:23:29.051 "nvme_version": "1.3" 00:23:29.051 }, 00:23:29.051 "ns_data": { 00:23:29.051 "id": 1, 00:23:29.051 "can_share": true 00:23:29.051 } 00:23:29.051 } 00:23:29.051 ], 00:23:29.051 "mp_policy": "active_passive" 00:23:29.051 } 00:23:29.051 } 00:23:29.051 ] 00:23:29.051 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.051 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.051 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.051 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:29.051 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.051 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:29.051 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.lNxRgFve6G 00:23:29.051 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n 
NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:29.052 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.lNxRgFve6G 00:23:29.052 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:29.052 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.052 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:29.052 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.052 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:29.052 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.052 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:29.052 [2024-07-24 23:11:46.778008] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:29.052 [2024-07-24 23:11:46.778143] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:29.052 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.052 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lNxRgFve6G 00:23:29.052 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.052 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:29.052 [2024-07-24 23:11:46.786020] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in 
v24.09 00:23:29.052 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.052 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lNxRgFve6G 00:23:29.052 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.052 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:29.052 [2024-07-24 23:11:46.794061] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:29.052 [2024-07-24 23:11:46.794099] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:29.313 nvme0n1 00:23:29.313 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.313 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:29.313 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.313 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:29.313 [ 00:23:29.313 { 00:23:29.313 "name": "nvme0n1", 00:23:29.313 "aliases": [ 00:23:29.313 "66d026d6-df3b-4479-aebb-a2ad34f8b104" 00:23:29.313 ], 00:23:29.313 "product_name": "NVMe disk", 00:23:29.313 "block_size": 512, 00:23:29.313 "num_blocks": 2097152, 00:23:29.313 "uuid": "66d026d6-df3b-4479-aebb-a2ad34f8b104", 00:23:29.313 "assigned_rate_limits": { 00:23:29.313 "rw_ios_per_sec": 0, 00:23:29.313 "rw_mbytes_per_sec": 0, 00:23:29.313 "r_mbytes_per_sec": 0, 00:23:29.313 "w_mbytes_per_sec": 0 00:23:29.313 }, 00:23:29.313 "claimed": false, 00:23:29.313 "zoned": false, 00:23:29.313 "supported_io_types": { 
00:23:29.313 "read": true, 00:23:29.313 "write": true, 00:23:29.313 "unmap": false, 00:23:29.313 "flush": true, 00:23:29.313 "reset": true, 00:23:29.313 "nvme_admin": true, 00:23:29.313 "nvme_io": true, 00:23:29.313 "nvme_io_md": false, 00:23:29.313 "write_zeroes": true, 00:23:29.313 "zcopy": false, 00:23:29.313 "get_zone_info": false, 00:23:29.313 "zone_management": false, 00:23:29.313 "zone_append": false, 00:23:29.313 "compare": true, 00:23:29.313 "compare_and_write": true, 00:23:29.313 "abort": true, 00:23:29.313 "seek_hole": false, 00:23:29.313 "seek_data": false, 00:23:29.313 "copy": true, 00:23:29.313 "nvme_iov_md": false 00:23:29.313 }, 00:23:29.313 "memory_domains": [ 00:23:29.313 { 00:23:29.313 "dma_device_id": "system", 00:23:29.313 "dma_device_type": 1 00:23:29.313 } 00:23:29.313 ], 00:23:29.313 "driver_specific": { 00:23:29.313 "nvme": [ 00:23:29.313 { 00:23:29.313 "trid": { 00:23:29.313 "trtype": "TCP", 00:23:29.313 "adrfam": "IPv4", 00:23:29.313 "traddr": "10.0.0.2", 00:23:29.313 "trsvcid": "4421", 00:23:29.313 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:29.313 }, 00:23:29.313 "ctrlr_data": { 00:23:29.313 "cntlid": 3, 00:23:29.313 "vendor_id": "0x8086", 00:23:29.313 "model_number": "SPDK bdev Controller", 00:23:29.313 "serial_number": "00000000000000000000", 00:23:29.313 "firmware_revision": "24.09", 00:23:29.313 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:29.313 "oacs": { 00:23:29.313 "security": 0, 00:23:29.313 "format": 0, 00:23:29.313 "firmware": 0, 00:23:29.313 "ns_manage": 0 00:23:29.313 }, 00:23:29.313 "multi_ctrlr": true, 00:23:29.313 "ana_reporting": false 00:23:29.313 }, 00:23:29.313 "vs": { 00:23:29.313 "nvme_version": "1.3" 00:23:29.313 }, 00:23:29.313 "ns_data": { 00:23:29.313 "id": 1, 00:23:29.313 "can_share": true 00:23:29.313 } 00:23:29.313 } 00:23:29.313 ], 00:23:29.313 "mp_policy": "active_passive" 00:23:29.313 } 00:23:29.313 } 00:23:29.313 ] 00:23:29.313 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.313 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.313 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.313 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:29.313 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.313 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.lNxRgFve6G 00:23:29.313 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:29.313 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:23:29.313 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:29.313 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:23:29.313 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:29.313 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:23:29.313 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:29.313 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:29.313 rmmod nvme_tcp 00:23:29.313 rmmod nvme_fabrics 00:23:29.313 rmmod nvme_keyring 00:23:29.313 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:29.313 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:23:29.313 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:23:29.313 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 941817 ']' 00:23:29.313 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 
941817 00:23:29.313 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 941817 ']' 00:23:29.313 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 941817 00:23:29.313 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:23:29.313 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:29.313 23:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 941817 00:23:29.313 23:11:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:29.313 23:11:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:29.313 23:11:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 941817' 00:23:29.313 killing process with pid 941817 00:23:29.313 23:11:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 941817 00:23:29.313 [2024-07-24 23:11:47.027996] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:29.313 [2024-07-24 23:11:47.028023] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:29.313 23:11:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 941817 00:23:29.575 23:11:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:29.575 23:11:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:29.575 23:11:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:29.575 23:11:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:23:29.575 23:11:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:29.575 23:11:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.575 23:11:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:29.575 23:11:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.489 23:11:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:31.489 00:23:31.489 real 0m11.870s 00:23:31.489 user 0m4.048s 00:23:31.489 sys 0m6.188s 00:23:31.489 23:11:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:31.489 23:11:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:31.489 ************************************ 00:23:31.489 END TEST nvmf_async_init 00:23:31.489 ************************************ 00:23:31.489 23:11:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:31.489 23:11:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:31.489 23:11:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:31.489 23:11:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.750 ************************************ 00:23:31.750 START TEST dma 00:23:31.750 ************************************ 00:23:31.750 23:11:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:31.750 * Looking for test storage... 
00:23:31.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:31.750 23:11:49 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:31.750 23:11:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:31.750 23:11:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:31.750 23:11:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:31.750 23:11:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:31.750 23:11:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:31.750 23:11:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:31.750 23:11:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:31.750 23:11:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:31.750 23:11:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:31.750 23:11:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:31.750 23:11:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:31.750 23:11:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:31.750 23:11:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:31.750 23:11:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:31.750 23:11:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:31.750 23:11:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:31.750 23:11:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:31.750 23:11:49 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:31.750 23:11:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:31.750 23:11:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:31.750 23:11:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:31.751 23:11:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.751 23:11:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.751 23:11:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.751 23:11:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:31.751 23:11:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.751 23:11:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:23:31.751 23:11:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:31.751 23:11:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:31.751 23:11:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:31.751 23:11:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:31.751 23:11:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:31.751 23:11:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # 
'[' -n '' ']' 00:23:31.751 23:11:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:31.751 23:11:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:31.751 23:11:49 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:31.751 23:11:49 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:31.751 00:23:31.751 real 0m0.135s 00:23:31.751 user 0m0.057s 00:23:31.751 sys 0m0.086s 00:23:31.751 23:11:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:31.751 23:11:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:31.751 ************************************ 00:23:31.751 END TEST dma 00:23:31.751 ************************************ 00:23:31.751 23:11:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:31.751 23:11:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:31.751 23:11:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:31.751 23:11:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.751 ************************************ 00:23:31.751 START TEST nvmf_identify 00:23:31.751 ************************************ 00:23:31.751 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:32.013 * Looking for test storage... 
00:23:32.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:23:32.013 23:11:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:40.223 23:11:57 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:40.223 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:40.223 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:40.223 Found net devices under 0000:31:00.0: cvl_0_0 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:40.223 Found net devices under 0000:31:00.1: cvl_0_1 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:40.223 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:40.224 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:40.224 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:40.224 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:40.224 23:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:40.484 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:40.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:40.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:23:40.484 00:23:40.484 --- 10.0.0.2 ping statistics --- 00:23:40.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.484 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:23:40.484 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:40.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:40.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.367 ms 00:23:40.484 00:23:40.484 --- 10.0.0.1 ping statistics --- 00:23:40.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.484 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:23:40.484 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:40.484 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:23:40.484 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:40.484 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:40.484 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:40.484 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:40.484 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:40.484 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:40.484 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:40.484 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:40.484 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:40.484 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 
00:23:40.484 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=946890 00:23:40.484 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:40.484 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:40.484 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 946890 00:23:40.484 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 946890 ']' 00:23:40.484 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.484 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:40.484 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.484 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:40.484 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:40.484 [2024-07-24 23:11:58.140462] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:23:40.484 [2024-07-24 23:11:58.140529] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.484 EAL: No free 2048 kB hugepages reported on node 1 00:23:40.484 [2024-07-24 23:11:58.219771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:40.744 [2024-07-24 23:11:58.295659] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:40.744 [2024-07-24 23:11:58.295700] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:40.744 [2024-07-24 23:11:58.295708] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:40.744 [2024-07-24 23:11:58.295714] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:40.744 [2024-07-24 23:11:58.295720] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:40.744 [2024-07-24 23:11:58.295901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:40.744 [2024-07-24 23:11:58.296113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:40.744 [2024-07-24 23:11:58.296274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:40.744 [2024-07-24 23:11:58.296275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.314 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:41.314 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:23:41.314 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:41.314 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.314 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:41.314 [2024-07-24 23:11:58.915475] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.314 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.314 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:41.314 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:41.314 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:41.314 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:41.314 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.314 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:41.314 Malloc0 00:23:41.314 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.314 23:11:58 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:41.314 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.314 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:41.314 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.314 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:41.314 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.314 23:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:41.314 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.314 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:41.314 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.314 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:41.314 [2024-07-24 23:11:59.014836] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.314 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.314 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:41.314 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.314 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:41.314 23:11:59 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.314 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:41.314 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.314 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:41.314 [ 00:23:41.314 { 00:23:41.314 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:41.314 "subtype": "Discovery", 00:23:41.314 "listen_addresses": [ 00:23:41.314 { 00:23:41.314 "trtype": "TCP", 00:23:41.314 "adrfam": "IPv4", 00:23:41.314 "traddr": "10.0.0.2", 00:23:41.314 "trsvcid": "4420" 00:23:41.314 } 00:23:41.314 ], 00:23:41.314 "allow_any_host": true, 00:23:41.314 "hosts": [] 00:23:41.314 }, 00:23:41.314 { 00:23:41.314 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.314 "subtype": "NVMe", 00:23:41.314 "listen_addresses": [ 00:23:41.314 { 00:23:41.314 "trtype": "TCP", 00:23:41.314 "adrfam": "IPv4", 00:23:41.314 "traddr": "10.0.0.2", 00:23:41.314 "trsvcid": "4420" 00:23:41.314 } 00:23:41.314 ], 00:23:41.314 "allow_any_host": true, 00:23:41.314 "hosts": [], 00:23:41.314 "serial_number": "SPDK00000000000001", 00:23:41.314 "model_number": "SPDK bdev Controller", 00:23:41.314 "max_namespaces": 32, 00:23:41.314 "min_cntlid": 1, 00:23:41.314 "max_cntlid": 65519, 00:23:41.314 "namespaces": [ 00:23:41.314 { 00:23:41.314 "nsid": 1, 00:23:41.314 "bdev_name": "Malloc0", 00:23:41.314 "name": "Malloc0", 00:23:41.314 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:41.314 "eui64": "ABCDEF0123456789", 00:23:41.314 "uuid": "0828b14c-79f1-45e0-943f-0c1e9190a950" 00:23:41.314 } 00:23:41.314 ] 00:23:41.314 } 00:23:41.314 ] 00:23:41.314 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.314 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:41.314 [2024-07-24 23:11:59.076678] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:23:41.314 [2024-07-24 23:11:59.076720] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid947239 ] 00:23:41.314 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.577 [2024-07-24 23:11:59.110410] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:41.577 [2024-07-24 23:11:59.110451] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:41.577 [2024-07-24 23:11:59.110456] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:41.577 [2024-07-24 23:11:59.110468] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:41.577 [2024-07-24 23:11:59.110476] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:41.577 [2024-07-24 23:11:59.110941] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:41.577 [2024-07-24 23:11:59.110968] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x14b6ec0 0 00:23:41.577 [2024-07-24 23:11:59.121758] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:41.577 [2024-07-24 23:11:59.121772] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:41.577 [2024-07-24 23:11:59.121777] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 
00:23:41.577 [2024-07-24 23:11:59.121780] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:41.577 [2024-07-24 23:11:59.121819] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.577 [2024-07-24 23:11:59.121824] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.577 [2024-07-24 23:11:59.121828] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14b6ec0) 00:23:41.577 [2024-07-24 23:11:59.121841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:41.577 [2024-07-24 23:11:59.121857] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1539e40, cid 0, qid 0 00:23:41.577 [2024-07-24 23:11:59.129761] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.577 [2024-07-24 23:11:59.129770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.577 [2024-07-24 23:11:59.129773] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.577 [2024-07-24 23:11:59.129778] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1539e40) on tqpair=0x14b6ec0 00:23:41.577 [2024-07-24 23:11:59.129789] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:41.577 [2024-07-24 23:11:59.129796] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:41.577 [2024-07-24 23:11:59.129801] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:41.577 [2024-07-24 23:11:59.129813] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.577 [2024-07-24 23:11:59.129817] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.577 [2024-07-24 23:11:59.129821] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0x14b6ec0) 00:23:41.577 [2024-07-24 23:11:59.129831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.577 [2024-07-24 23:11:59.129844] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1539e40, cid 0, qid 0 00:23:41.577 [2024-07-24 23:11:59.130069] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.577 [2024-07-24 23:11:59.130075] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.577 [2024-07-24 23:11:59.130079] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.577 [2024-07-24 23:11:59.130082] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1539e40) on tqpair=0x14b6ec0 00:23:41.577 [2024-07-24 23:11:59.130090] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:41.577 [2024-07-24 23:11:59.130097] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:41.577 [2024-07-24 23:11:59.130104] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.577 [2024-07-24 23:11:59.130108] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.577 [2024-07-24 23:11:59.130111] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14b6ec0) 00:23:41.577 [2024-07-24 23:11:59.130118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.577 [2024-07-24 23:11:59.130128] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1539e40, cid 0, qid 0 00:23:41.577 [2024-07-24 23:11:59.130329] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.577 [2024-07-24 23:11:59.130335] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:23:41.577 [2024-07-24 23:11:59.130339] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.577 [2024-07-24 23:11:59.130343] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1539e40) on tqpair=0x14b6ec0 00:23:41.577 [2024-07-24 23:11:59.130347] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:41.577 [2024-07-24 23:11:59.130355] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:41.577 [2024-07-24 23:11:59.130361] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.577 [2024-07-24 23:11:59.130365] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.577 [2024-07-24 23:11:59.130368] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14b6ec0) 00:23:41.577 [2024-07-24 23:11:59.130375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.577 [2024-07-24 23:11:59.130385] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1539e40, cid 0, qid 0 00:23:41.577 [2024-07-24 23:11:59.130568] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.577 [2024-07-24 23:11:59.130574] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.577 [2024-07-24 23:11:59.130577] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.577 [2024-07-24 23:11:59.130581] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1539e40) on tqpair=0x14b6ec0 00:23:41.577 [2024-07-24 23:11:59.130586] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:41.577 [2024-07-24 23:11:59.130594] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.577 [2024-07-24 23:11:59.130598] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.577 [2024-07-24 23:11:59.130602] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14b6ec0) 00:23:41.577 [2024-07-24 23:11:59.130608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.577 [2024-07-24 23:11:59.130618] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1539e40, cid 0, qid 0 00:23:41.577 [2024-07-24 23:11:59.130816] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.577 [2024-07-24 23:11:59.130823] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.577 [2024-07-24 23:11:59.130827] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.577 [2024-07-24 23:11:59.130830] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1539e40) on tqpair=0x14b6ec0 00:23:41.577 [2024-07-24 23:11:59.130835] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:41.577 [2024-07-24 23:11:59.130840] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:41.577 [2024-07-24 23:11:59.130847] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:41.577 [2024-07-24 23:11:59.130952] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:41.577 [2024-07-24 23:11:59.130957] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:23:41.577 [2024-07-24 23:11:59.130965] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.577 [2024-07-24 23:11:59.130969] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.578 [2024-07-24 23:11:59.130972] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14b6ec0) 00:23:41.578 [2024-07-24 23:11:59.130979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.578 [2024-07-24 23:11:59.130989] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1539e40, cid 0, qid 0 00:23:41.578 [2024-07-24 23:11:59.131211] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.578 [2024-07-24 23:11:59.131217] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.578 [2024-07-24 23:11:59.131220] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.578 [2024-07-24 23:11:59.131224] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1539e40) on tqpair=0x14b6ec0 00:23:41.578 [2024-07-24 23:11:59.131229] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:41.578 [2024-07-24 23:11:59.131238] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.578 [2024-07-24 23:11:59.131241] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.578 [2024-07-24 23:11:59.131245] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14b6ec0) 00:23:41.578 [2024-07-24 23:11:59.131251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.578 [2024-07-24 23:11:59.131261] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1539e40, cid 0, qid 0 00:23:41.578 [2024-07-24 
23:11:59.131492] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.578 [2024-07-24 23:11:59.131498] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.578 [2024-07-24 23:11:59.131501] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.578 [2024-07-24 23:11:59.131505] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1539e40) on tqpair=0x14b6ec0 00:23:41.578 [2024-07-24 23:11:59.131509] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:41.578 [2024-07-24 23:11:59.131514] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:41.578 [2024-07-24 23:11:59.131521] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:41.578 [2024-07-24 23:11:59.131528] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:41.578 [2024-07-24 23:11:59.131538] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.578 [2024-07-24 23:11:59.131542] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14b6ec0) 00:23:41.578 [2024-07-24 23:11:59.131549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.578 [2024-07-24 23:11:59.131559] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1539e40, cid 0, qid 0 00:23:41.578 [2024-07-24 23:11:59.131794] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:41.578 [2024-07-24 23:11:59.131801] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:23:41.578 [2024-07-24 23:11:59.131804] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:41.578 [2024-07-24 23:11:59.131808] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14b6ec0): datao=0, datal=4096, cccid=0 00:23:41.578 [2024-07-24 23:11:59.131813] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1539e40) on tqpair(0x14b6ec0): expected_datao=0, payload_size=4096 00:23:41.578 [2024-07-24 23:11:59.131817] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.578 [2024-07-24 23:11:59.131825] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:41.578 [2024-07-24 23:11:59.131829] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:41.578 [2024-07-24 23:11:59.131974] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.578 [2024-07-24 23:11:59.131980] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.578 [2024-07-24 23:11:59.131983] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.578 [2024-07-24 23:11:59.131987] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1539e40) on tqpair=0x14b6ec0 00:23:41.578 [2024-07-24 23:11:59.131994] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:41.578 [2024-07-24 23:11:59.131999] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:41.578 [2024-07-24 23:11:59.132003] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:41.578 [2024-07-24 23:11:59.132008] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:41.578 [2024-07-24 23:11:59.132013] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and 
write: 1 00:23:41.578 [2024-07-24 23:11:59.132017] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:41.578 [2024-07-24 23:11:59.132025] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:41.578 [2024-07-24 23:11:59.132034] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.578 [2024-07-24 23:11:59.132038] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.578 [2024-07-24 23:11:59.132042] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14b6ec0) 00:23:41.578 [2024-07-24 23:11:59.132049] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:41.578 [2024-07-24 23:11:59.132060] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1539e40, cid 0, qid 0 00:23:41.578 [2024-07-24 23:11:59.132271] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.578 [2024-07-24 23:11:59.132277] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.578 [2024-07-24 23:11:59.132280] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.578 [2024-07-24 23:11:59.132284] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1539e40) on tqpair=0x14b6ec0 00:23:41.578 [2024-07-24 23:11:59.132291] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.578 [2024-07-24 23:11:59.132296] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.578 [2024-07-24 23:11:59.132300] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14b6ec0) 00:23:41.578 [2024-07-24 23:11:59.132306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:41.578 [2024-07-24 23:11:59.132312] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.578 [2024-07-24 23:11:59.132316] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.578 [2024-07-24 23:11:59.132319] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x14b6ec0) 00:23:41.578 [2024-07-24 23:11:59.132325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:41.578 [2024-07-24 23:11:59.132331] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.578 [2024-07-24 23:11:59.132334] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.578 [2024-07-24 23:11:59.132337] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x14b6ec0) 00:23:41.578 [2024-07-24 23:11:59.132343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:41.578 [2024-07-24 23:11:59.132349] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.578 [2024-07-24 23:11:59.132352] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.578 [2024-07-24 23:11:59.132356] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14b6ec0) 00:23:41.578 [2024-07-24 23:11:59.132361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:41.578 [2024-07-24 23:11:59.132366] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:41.578 [2024-07-24 23:11:59.132376] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 
30000 ms) 00:23:41.578 [2024-07-24 23:11:59.132382] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.578 [2024-07-24 23:11:59.132386] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14b6ec0) 00:23:41.578 [2024-07-24 23:11:59.132392] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.578 [2024-07-24 23:11:59.132403] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1539e40, cid 0, qid 0 00:23:41.578 [2024-07-24 23:11:59.132408] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1539fc0, cid 1, qid 0 00:23:41.578 [2024-07-24 23:11:59.132413] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153a140, cid 2, qid 0 00:23:41.578 [2024-07-24 23:11:59.132418] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153a2c0, cid 3, qid 0 00:23:41.578 [2024-07-24 23:11:59.132422] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153a440, cid 4, qid 0 00:23:41.578 [2024-07-24 23:11:59.132665] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.578 [2024-07-24 23:11:59.132672] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.578 [2024-07-24 23:11:59.132675] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.578 [2024-07-24 23:11:59.132679] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153a440) on tqpair=0x14b6ec0 00:23:41.578 [2024-07-24 23:11:59.132684] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:41.578 [2024-07-24 23:11:59.132689] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:41.578 [2024-07-24 23:11:59.132698] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.578 [2024-07-24 23:11:59.132702] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14b6ec0) 00:23:41.578 [2024-07-24 23:11:59.132710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.578 [2024-07-24 23:11:59.132720] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153a440, cid 4, qid 0 00:23:41.578 [2024-07-24 23:11:59.132980] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:41.578 [2024-07-24 23:11:59.132988] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:41.578 [2024-07-24 23:11:59.132991] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:41.578 [2024-07-24 23:11:59.132995] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14b6ec0): datao=0, datal=4096, cccid=4 00:23:41.578 [2024-07-24 23:11:59.132999] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x153a440) on tqpair(0x14b6ec0): expected_datao=0, payload_size=4096 00:23:41.578 [2024-07-24 23:11:59.133003] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.578 [2024-07-24 23:11:59.133010] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:41.578 [2024-07-24 23:11:59.133014] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:41.579 [2024-07-24 23:11:59.173954] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.579 [2024-07-24 23:11:59.173963] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.579 [2024-07-24 23:11:59.173966] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.579 [2024-07-24 23:11:59.173970] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153a440) on tqpair=0x14b6ec0 00:23:41.579 [2024-07-24 23:11:59.173981] 
nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:41.579 [2024-07-24 23:11:59.174005] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.579 [2024-07-24 23:11:59.174010] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14b6ec0) 00:23:41.579 [2024-07-24 23:11:59.174017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.579 [2024-07-24 23:11:59.174024] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.579 [2024-07-24 23:11:59.174028] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.579 [2024-07-24 23:11:59.174031] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14b6ec0) 00:23:41.579 [2024-07-24 23:11:59.174037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:41.579 [2024-07-24 23:11:59.174051] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153a440, cid 4, qid 0 00:23:41.579 [2024-07-24 23:11:59.174057] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153a5c0, cid 5, qid 0 00:23:41.579 [2024-07-24 23:11:59.174241] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:41.579 [2024-07-24 23:11:59.174248] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:41.579 [2024-07-24 23:11:59.174251] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:41.579 [2024-07-24 23:11:59.174255] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14b6ec0): datao=0, datal=1024, cccid=4 00:23:41.579 [2024-07-24 23:11:59.174259] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x153a440) on tqpair(0x14b6ec0): expected_datao=0, 
payload_size=1024 00:23:41.579 [2024-07-24 23:11:59.174263] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.579 [2024-07-24 23:11:59.174270] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:41.579 [2024-07-24 23:11:59.174273] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:41.579 [2024-07-24 23:11:59.174279] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.579 [2024-07-24 23:11:59.174285] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.579 [2024-07-24 23:11:59.174288] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.579 [2024-07-24 23:11:59.174292] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153a5c0) on tqpair=0x14b6ec0 00:23:41.579 [2024-07-24 23:11:59.218758] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.579 [2024-07-24 23:11:59.218767] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.579 [2024-07-24 23:11:59.218770] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.579 [2024-07-24 23:11:59.218774] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153a440) on tqpair=0x14b6ec0 00:23:41.579 [2024-07-24 23:11:59.218790] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.579 [2024-07-24 23:11:59.218794] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14b6ec0) 00:23:41.579 [2024-07-24 23:11:59.218801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.579 [2024-07-24 23:11:59.218817] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153a440, cid 4, qid 0 00:23:41.579 [2024-07-24 23:11:59.219018] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:41.579 [2024-07-24 23:11:59.219025] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:41.579 [2024-07-24 23:11:59.219028] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:41.579 [2024-07-24 23:11:59.219032] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14b6ec0): datao=0, datal=3072, cccid=4 00:23:41.579 [2024-07-24 23:11:59.219036] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x153a440) on tqpair(0x14b6ec0): expected_datao=0, payload_size=3072 00:23:41.579 [2024-07-24 23:11:59.219040] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.579 [2024-07-24 23:11:59.219047] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:41.579 [2024-07-24 23:11:59.219051] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:41.579 [2024-07-24 23:11:59.219213] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.579 [2024-07-24 23:11:59.219219] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.579 [2024-07-24 23:11:59.219223] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.579 [2024-07-24 23:11:59.219227] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153a440) on tqpair=0x14b6ec0 00:23:41.579 [2024-07-24 23:11:59.219234] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.579 [2024-07-24 23:11:59.219238] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14b6ec0) 00:23:41.579 [2024-07-24 23:11:59.219244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.579 [2024-07-24 23:11:59.219258] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153a440, cid 4, qid 0 00:23:41.579 [2024-07-24 23:11:59.219512] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:41.579 [2024-07-24 
23:11:59.219518] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:41.579 [2024-07-24 23:11:59.219521] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:41.579 [2024-07-24 23:11:59.219525] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14b6ec0): datao=0, datal=8, cccid=4 00:23:41.579 [2024-07-24 23:11:59.219529] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x153a440) on tqpair(0x14b6ec0): expected_datao=0, payload_size=8 00:23:41.579 [2024-07-24 23:11:59.219533] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.579 [2024-07-24 23:11:59.219540] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:41.579 [2024-07-24 23:11:59.219543] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:41.579 [2024-07-24 23:11:59.260954] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.579 [2024-07-24 23:11:59.260963] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.579 [2024-07-24 23:11:59.260967] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.579 [2024-07-24 23:11:59.260971] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153a440) on tqpair=0x14b6ec0
00:23:41.579 =====================================================
00:23:41.579 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:23:41.579 =====================================================
00:23:41.579 Controller Capabilities/Features
00:23:41.579 ================================
00:23:41.579 Vendor ID: 0000
00:23:41.579 Subsystem Vendor ID: 0000
00:23:41.579 Serial Number: ....................
00:23:41.579 Model Number: ........................................
00:23:41.579 Firmware Version: 24.09
00:23:41.579 Recommended Arb Burst: 0
00:23:41.579 IEEE OUI Identifier: 00 00 00
00:23:41.579 Multi-path I/O
00:23:41.579 May have multiple subsystem ports: No
00:23:41.579 May have multiple controllers: No
00:23:41.579 Associated with SR-IOV VF: No
00:23:41.579 Max Data Transfer Size: 131072
00:23:41.579 Max Number of Namespaces: 0
00:23:41.579 Max Number of I/O Queues: 1024
00:23:41.579 NVMe Specification Version (VS): 1.3
00:23:41.579 NVMe Specification Version (Identify): 1.3
00:23:41.579 Maximum Queue Entries: 128
00:23:41.579 Contiguous Queues Required: Yes
00:23:41.579 Arbitration Mechanisms Supported
00:23:41.579 Weighted Round Robin: Not Supported
00:23:41.579 Vendor Specific: Not Supported
00:23:41.579 Reset Timeout: 15000 ms
00:23:41.579 Doorbell Stride: 4 bytes
00:23:41.579 NVM Subsystem Reset: Not Supported
00:23:41.579 Command Sets Supported
00:23:41.579 NVM Command Set: Supported
00:23:41.579 Boot Partition: Not Supported
00:23:41.579 Memory Page Size Minimum: 4096 bytes
00:23:41.579 Memory Page Size Maximum: 4096 bytes
00:23:41.579 Persistent Memory Region: Not Supported
00:23:41.579 Optional Asynchronous Events Supported
00:23:41.579 Namespace Attribute Notices: Not Supported
00:23:41.579 Firmware Activation Notices: Not Supported
00:23:41.579 ANA Change Notices: Not Supported
00:23:41.579 PLE Aggregate Log Change Notices: Not Supported
00:23:41.579 LBA Status Info Alert Notices: Not Supported
00:23:41.579 EGE Aggregate Log Change Notices: Not Supported
00:23:41.579 Normal NVM Subsystem Shutdown event: Not Supported
00:23:41.579 Zone Descriptor Change Notices: Not Supported
00:23:41.579 Discovery Log Change Notices: Supported
00:23:41.579 Controller Attributes
00:23:41.579 128-bit Host Identifier: Not Supported
00:23:41.579 Non-Operational Permissive Mode: Not Supported
00:23:41.579 NVM Sets: Not Supported
00:23:41.579 Read Recovery Levels: Not Supported
00:23:41.579 Endurance Groups: Not Supported
00:23:41.579 Predictable Latency Mode: Not Supported
00:23:41.579 Traffic Based Keep ALive: Not Supported
00:23:41.579 Namespace Granularity: Not Supported
00:23:41.579 SQ Associations: Not Supported
00:23:41.579 UUID List: Not Supported
00:23:41.579 Multi-Domain Subsystem: Not Supported
00:23:41.579 Fixed Capacity Management: Not Supported
00:23:41.579 Variable Capacity Management: Not Supported
00:23:41.579 Delete Endurance Group: Not Supported
00:23:41.579 Delete NVM Set: Not Supported
00:23:41.579 Extended LBA Formats Supported: Not Supported
00:23:41.579 Flexible Data Placement Supported: Not Supported
00:23:41.579
00:23:41.579 Controller Memory Buffer Support
00:23:41.579 ================================
00:23:41.579 Supported: No
00:23:41.579
00:23:41.579 Persistent Memory Region Support
00:23:41.579 ================================
00:23:41.579 Supported: No
00:23:41.579
00:23:41.579 Admin Command Set Attributes
00:23:41.579 ============================
00:23:41.579 Security Send/Receive: Not Supported
00:23:41.579 Format NVM: Not Supported
00:23:41.579 Firmware Activate/Download: Not Supported
00:23:41.579 Namespace Management: Not Supported
00:23:41.580 Device Self-Test: Not Supported
00:23:41.580 Directives: Not Supported
00:23:41.580 NVMe-MI: Not Supported
00:23:41.580 Virtualization Management: Not Supported
00:23:41.580 Doorbell Buffer Config: Not Supported
00:23:41.580 Get LBA Status Capability: Not Supported
00:23:41.580 Command & Feature Lockdown Capability: Not Supported
00:23:41.580 Abort Command Limit: 1
00:23:41.580 Async Event Request Limit: 4
00:23:41.580 Number of Firmware Slots: N/A
00:23:41.580 Firmware Slot 1 Read-Only: N/A
00:23:41.580 Firmware Activation Without Reset: N/A
00:23:41.580 Multiple Update Detection Support: N/A
00:23:41.580 Firmware Update Granularity: No Information Provided
00:23:41.580 Per-Namespace SMART Log: No
00:23:41.580 Asymmetric Namespace Access Log Page: Not Supported
00:23:41.580 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:23:41.580 Command Effects Log Page: Not Supported
00:23:41.580 Get Log Page Extended Data: Supported
00:23:41.580 Telemetry Log Pages: Not Supported
00:23:41.580 Persistent Event Log Pages: Not Supported
00:23:41.580 Supported Log Pages Log Page: May Support
00:23:41.580 Commands Supported & Effects Log Page: Not Supported
00:23:41.580 Feature Identifiers & Effects Log Page:May Support
00:23:41.580 NVMe-MI Commands & Effects Log Page: May Support
00:23:41.580 Data Area 4 for Telemetry Log: Not Supported
00:23:41.580 Error Log Page Entries Supported: 128
00:23:41.580 Keep Alive: Not Supported
00:23:41.580
00:23:41.580 NVM Command Set Attributes
00:23:41.580 ==========================
00:23:41.580 Submission Queue Entry Size
00:23:41.580 Max: 1
00:23:41.580 Min: 1
00:23:41.580 Completion Queue Entry Size
00:23:41.580 Max: 1
00:23:41.580 Min: 1
00:23:41.580 Number of Namespaces: 0
00:23:41.580 Compare Command: Not Supported
00:23:41.580 Write Uncorrectable Command: Not Supported
00:23:41.580 Dataset Management Command: Not Supported
00:23:41.580 Write Zeroes Command: Not Supported
00:23:41.580 Set Features Save Field: Not Supported
00:23:41.580 Reservations: Not Supported
00:23:41.580 Timestamp: Not Supported
00:23:41.580 Copy: Not Supported
00:23:41.580 Volatile Write Cache: Not Present
00:23:41.580 Atomic Write Unit (Normal): 1
00:23:41.580 Atomic Write Unit (PFail): 1
00:23:41.580 Atomic Compare & Write Unit: 1
00:23:41.580 Fused Compare & Write: Supported
00:23:41.580 Scatter-Gather List
00:23:41.580 SGL Command Set: Supported
00:23:41.580 SGL Keyed: Supported
00:23:41.580 SGL Bit Bucket Descriptor: Not Supported
00:23:41.580 SGL Metadata Pointer: Not Supported
00:23:41.580 Oversized SGL: Not Supported
00:23:41.580 SGL Metadata Address: Not Supported
00:23:41.580 SGL Offset: Supported
00:23:41.580 Transport SGL Data Block: Not Supported
00:23:41.580 Replay Protected Memory Block: Not Supported
00:23:41.580
00:23:41.580 Firmware Slot Information
00:23:41.580 =========================
00:23:41.580 Active slot: 0
00:23:41.580
00:23:41.580
00:23:41.580 Error Log
00:23:41.580 =========
00:23:41.580
00:23:41.580 Active Namespaces
00:23:41.580 =================
00:23:41.580 Discovery Log Page
00:23:41.580 ==================
00:23:41.580 Generation Counter: 2
00:23:41.580 Number of Records: 2
00:23:41.580 Record Format: 0
00:23:41.580
00:23:41.580 Discovery Log Entry 0
00:23:41.580 ----------------------
00:23:41.580 Transport Type: 3 (TCP)
00:23:41.580 Address Family: 1 (IPv4)
00:23:41.580 Subsystem Type: 3 (Current Discovery Subsystem)
00:23:41.580 Entry Flags:
00:23:41.580 Duplicate Returned Information: 1
00:23:41.580 Explicit Persistent Connection Support for Discovery: 1
00:23:41.580 Transport Requirements:
00:23:41.580 Secure Channel: Not Required
00:23:41.580 Port ID: 0 (0x0000)
00:23:41.580 Controller ID: 65535 (0xffff)
00:23:41.580 Admin Max SQ Size: 128
00:23:41.580 Transport Service Identifier: 4420
00:23:41.580 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:23:41.580 Transport Address: 10.0.0.2
00:23:41.580 Discovery Log Entry 1
00:23:41.580 ----------------------
00:23:41.580 Transport Type: 3 (TCP)
00:23:41.580 Address Family: 1 (IPv4)
00:23:41.580 Subsystem Type: 2 (NVM Subsystem)
00:23:41.580 Entry Flags:
00:23:41.580 Duplicate Returned Information: 0
00:23:41.580 Explicit Persistent Connection Support for Discovery: 0
00:23:41.580 Transport Requirements:
00:23:41.580 Secure Channel: Not Required
00:23:41.580 Port ID: 0 (0x0000)
00:23:41.580 Controller ID: 65535 (0xffff)
00:23:41.580 Admin Max SQ Size: 128
00:23:41.580 Transport Service Identifier: 4420
00:23:41.580 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:23:41.580 Transport Address: 10.0.0.2 [2024-07-24 23:11:59.261053] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:41.580 [2024-07-24 23:11:59.261064]
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1539e40) on tqpair=0x14b6ec0 00:23:41.580 [2024-07-24 23:11:59.261071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.580 [2024-07-24 23:11:59.261076] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1539fc0) on tqpair=0x14b6ec0 00:23:41.580 [2024-07-24 23:11:59.261081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.580 [2024-07-24 23:11:59.261086] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153a140) on tqpair=0x14b6ec0 00:23:41.580 [2024-07-24 23:11:59.261090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.580 [2024-07-24 23:11:59.261095] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153a2c0) on tqpair=0x14b6ec0 00:23:41.580 [2024-07-24 23:11:59.261099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.580 [2024-07-24 23:11:59.261109] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.580 [2024-07-24 23:11:59.261113] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.580 [2024-07-24 23:11:59.261117] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14b6ec0) 00:23:41.580 [2024-07-24 23:11:59.261125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.580 [2024-07-24 23:11:59.261138] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153a2c0, cid 3, qid 0 00:23:41.580 [2024-07-24 23:11:59.261237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.580 [2024-07-24 23:11:59.261244] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.580 [2024-07-24 23:11:59.261247] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.580 [2024-07-24 23:11:59.261251] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153a2c0) on tqpair=0x14b6ec0 00:23:41.580 [2024-07-24 23:11:59.261257] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.580 [2024-07-24 23:11:59.261261] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.580 [2024-07-24 23:11:59.261264] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14b6ec0) 00:23:41.580 [2024-07-24 23:11:59.261271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.580 [2024-07-24 23:11:59.261284] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153a2c0, cid 3, qid 0 00:23:41.580 [2024-07-24 23:11:59.264757] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.580 [2024-07-24 23:11:59.264766] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.580 [2024-07-24 23:11:59.264769] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.580 [2024-07-24 23:11:59.264773] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153a2c0) on tqpair=0x14b6ec0 00:23:41.580 [2024-07-24 23:11:59.264778] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:41.580 [2024-07-24 23:11:59.264782] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:41.580 [2024-07-24 23:11:59.264792] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.580 [2024-07-24 23:11:59.264796] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.580 [2024-07-24 
23:11:59.264799] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14b6ec0) 00:23:41.580 [2024-07-24 23:11:59.264806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.580 [2024-07-24 23:11:59.264818] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x153a2c0, cid 3, qid 0 00:23:41.580 [2024-07-24 23:11:59.265009] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.580 [2024-07-24 23:11:59.265016] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.580 [2024-07-24 23:11:59.265019] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.580 [2024-07-24 23:11:59.265023] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x153a2c0) on tqpair=0x14b6ec0 00:23:41.580 [2024-07-24 23:11:59.265030] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 0 milliseconds 00:23:41.580 00:23:41.580 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:41.581 [2024-07-24 23:11:59.302954] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:23:41.581 [2024-07-24 23:11:59.302996] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid947241 ] 00:23:41.581 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.581 [2024-07-24 23:11:59.335518] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:41.581 [2024-07-24 23:11:59.335561] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:41.581 [2024-07-24 23:11:59.335566] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:41.581 [2024-07-24 23:11:59.335578] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:41.581 [2024-07-24 23:11:59.335586] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:41.581 [2024-07-24 23:11:59.338773] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:41.581 [2024-07-24 23:11:59.338798] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xb57ec0 0 00:23:41.581 [2024-07-24 23:11:59.346757] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:41.581 [2024-07-24 23:11:59.346770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:41.581 [2024-07-24 23:11:59.346775] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:41.581 [2024-07-24 23:11:59.346778] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:41.581 [2024-07-24 23:11:59.346810] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.581 [2024-07-24 23:11:59.346816] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:23:41.581 [2024-07-24 23:11:59.346819] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb57ec0) 00:23:41.581 [2024-07-24 23:11:59.346831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:41.581 [2024-07-24 23:11:59.346847] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbdae40, cid 0, qid 0 00:23:41.581 [2024-07-24 23:11:59.354763] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.581 [2024-07-24 23:11:59.354773] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.581 [2024-07-24 23:11:59.354776] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.581 [2024-07-24 23:11:59.354781] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbdae40) on tqpair=0xb57ec0 00:23:41.581 [2024-07-24 23:11:59.354792] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:41.581 [2024-07-24 23:11:59.354798] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:41.581 [2024-07-24 23:11:59.354803] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:41.581 [2024-07-24 23:11:59.354818] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.581 [2024-07-24 23:11:59.354822] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.581 [2024-07-24 23:11:59.354826] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb57ec0) 00:23:41.581 [2024-07-24 23:11:59.354833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.581 [2024-07-24 23:11:59.354846] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbdae40, cid 0, qid 0 
00:23:41.581 [2024-07-24 23:11:59.355071] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.581 [2024-07-24 23:11:59.355078] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.581 [2024-07-24 23:11:59.355081] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.581 [2024-07-24 23:11:59.355085] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbdae40) on tqpair=0xb57ec0 00:23:41.581 [2024-07-24 23:11:59.355092] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:41.581 [2024-07-24 23:11:59.355099] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:41.581 [2024-07-24 23:11:59.355106] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.581 [2024-07-24 23:11:59.355109] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.581 [2024-07-24 23:11:59.355113] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb57ec0) 00:23:41.581 [2024-07-24 23:11:59.355120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.581 [2024-07-24 23:11:59.355131] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbdae40, cid 0, qid 0 00:23:41.581 [2024-07-24 23:11:59.355349] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.581 [2024-07-24 23:11:59.355356] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.581 [2024-07-24 23:11:59.355359] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.581 [2024-07-24 23:11:59.355363] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbdae40) on tqpair=0xb57ec0 00:23:41.581 [2024-07-24 23:11:59.355368] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:41.581 [2024-07-24 23:11:59.355376] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:41.581 [2024-07-24 23:11:59.355382] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.581 [2024-07-24 23:11:59.355386] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.581 [2024-07-24 23:11:59.355390] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb57ec0) 00:23:41.581 [2024-07-24 23:11:59.355396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.581 [2024-07-24 23:11:59.355406] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbdae40, cid 0, qid 0 00:23:41.581 [2024-07-24 23:11:59.355613] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.581 [2024-07-24 23:11:59.355619] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.581 [2024-07-24 23:11:59.355623] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.581 [2024-07-24 23:11:59.355627] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbdae40) on tqpair=0xb57ec0 00:23:41.581 [2024-07-24 23:11:59.355631] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:41.581 [2024-07-24 23:11:59.355641] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.581 [2024-07-24 23:11:59.355644] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.581 [2024-07-24 23:11:59.355650] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb57ec0) 00:23:41.581 [2024-07-24 23:11:59.355657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.581 [2024-07-24 23:11:59.355667] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbdae40, cid 0, qid 0 00:23:41.581 [2024-07-24 23:11:59.355857] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.581 [2024-07-24 23:11:59.355864] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.581 [2024-07-24 23:11:59.355867] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.581 [2024-07-24 23:11:59.355871] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbdae40) on tqpair=0xb57ec0 00:23:41.581 [2024-07-24 23:11:59.355876] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:41.581 [2024-07-24 23:11:59.355880] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:41.581 [2024-07-24 23:11:59.355888] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:41.581 [2024-07-24 23:11:59.355993] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:41.581 [2024-07-24 23:11:59.355997] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:41.581 [2024-07-24 23:11:59.356004] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.581 [2024-07-24 23:11:59.356008] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.581 [2024-07-24 23:11:59.356011] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb57ec0) 00:23:41.581 [2024-07-24 23:11:59.356018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.581 [2024-07-24 23:11:59.356029] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbdae40, cid 0, qid 0 00:23:41.581 [2024-07-24 23:11:59.356233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.581 [2024-07-24 23:11:59.356239] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.581 [2024-07-24 23:11:59.356242] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.581 [2024-07-24 23:11:59.356246] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbdae40) on tqpair=0xb57ec0 00:23:41.581 [2024-07-24 23:11:59.356251] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:41.581 [2024-07-24 23:11:59.356259] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.581 [2024-07-24 23:11:59.356263] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.582 [2024-07-24 23:11:59.356267] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb57ec0) 00:23:41.582 [2024-07-24 23:11:59.356273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.582 [2024-07-24 23:11:59.356283] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbdae40, cid 0, qid 0 00:23:41.582 [2024-07-24 23:11:59.356491] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.582 [2024-07-24 23:11:59.356498] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.582 [2024-07-24 23:11:59.356501] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.582 [2024-07-24 23:11:59.356505] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbdae40) on tqpair=0xb57ec0 00:23:41.582 [2024-07-24 23:11:59.356509] 
nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:41.582 [2024-07-24 23:11:59.356514] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:41.582 [2024-07-24 23:11:59.356524] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:41.582 [2024-07-24 23:11:59.356531] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:41.582 [2024-07-24 23:11:59.356540] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.582 [2024-07-24 23:11:59.356543] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb57ec0) 00:23:41.582 [2024-07-24 23:11:59.356550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.582 [2024-07-24 23:11:59.356560] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbdae40, cid 0, qid 0 00:23:41.582 [2024-07-24 23:11:59.356791] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:41.582 [2024-07-24 23:11:59.356798] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:41.582 [2024-07-24 23:11:59.356802] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:41.582 [2024-07-24 23:11:59.356806] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb57ec0): datao=0, datal=4096, cccid=0 00:23:41.582 [2024-07-24 23:11:59.356810] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbdae40) on tqpair(0xb57ec0): expected_datao=0, payload_size=4096 00:23:41.582 [2024-07-24 23:11:59.356815] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.582 [2024-07-24 23:11:59.356822] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:41.582 [2024-07-24 23:11:59.356826] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:41.582 [2024-07-24 23:11:59.356992] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.582 [2024-07-24 23:11:59.356999] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.582 [2024-07-24 23:11:59.357002] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.582 [2024-07-24 23:11:59.357006] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbdae40) on tqpair=0xb57ec0 00:23:41.582 [2024-07-24 23:11:59.357013] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:41.582 [2024-07-24 23:11:59.357018] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:41.582 [2024-07-24 23:11:59.357022] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:41.582 [2024-07-24 23:11:59.357026] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:41.582 [2024-07-24 23:11:59.357030] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:41.582 [2024-07-24 23:11:59.357035] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:41.582 [2024-07-24 23:11:59.357042] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:41.582 [2024-07-24 23:11:59.357051] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.582 [2024-07-24 23:11:59.357056] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.582 [2024-07-24 23:11:59.357059] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb57ec0) 00:23:41.582 [2024-07-24 23:11:59.357066] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:41.582 [2024-07-24 23:11:59.357077] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbdae40, cid 0, qid 0 00:23:41.582 [2024-07-24 23:11:59.357253] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.582 [2024-07-24 23:11:59.357260] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.582 [2024-07-24 23:11:59.357264] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.582 [2024-07-24 23:11:59.357268] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbdae40) on tqpair=0xb57ec0 00:23:41.582 [2024-07-24 23:11:59.357276] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.582 [2024-07-24 23:11:59.357280] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.582 [2024-07-24 23:11:59.357284] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb57ec0) 00:23:41.582 [2024-07-24 23:11:59.357290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:41.582 [2024-07-24 23:11:59.357296] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.582 [2024-07-24 23:11:59.357300] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.582 [2024-07-24 23:11:59.357303] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xb57ec0) 00:23:41.582 [2024-07-24 23:11:59.357309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:41.582 [2024-07-24 23:11:59.357315] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.582 [2024-07-24 23:11:59.357318] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.582 [2024-07-24 23:11:59.357322] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xb57ec0) 00:23:41.582 [2024-07-24 23:11:59.357327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:41.582 [2024-07-24 23:11:59.357333] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.582 [2024-07-24 23:11:59.357337] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.582 [2024-07-24 23:11:59.357340] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb57ec0) 00:23:41.582 [2024-07-24 23:11:59.357346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:41.582 [2024-07-24 23:11:59.357351] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:41.582 [2024-07-24 23:11:59.357361] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:41.582 [2024-07-24 23:11:59.357368] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.582 [2024-07-24 23:11:59.357371] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb57ec0) 00:23:41.582 [2024-07-24 23:11:59.357378] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.582 [2024-07-24 23:11:59.357389] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xbdae40, cid 0, qid 0 00:23:41.582 [2024-07-24 23:11:59.357394] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbdafc0, cid 1, qid 0 00:23:41.582 [2024-07-24 23:11:59.357399] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbdb140, cid 2, qid 0 00:23:41.582 [2024-07-24 23:11:59.357404] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbdb2c0, cid 3, qid 0 00:23:41.582 [2024-07-24 23:11:59.357409] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbdb440, cid 4, qid 0 00:23:41.582 [2024-07-24 23:11:59.357653] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.582 [2024-07-24 23:11:59.357659] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.582 [2024-07-24 23:11:59.357663] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.582 [2024-07-24 23:11:59.357667] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbdb440) on tqpair=0xb57ec0 00:23:41.582 [2024-07-24 23:11:59.357671] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:41.582 [2024-07-24 23:11:59.357676] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:41.582 [2024-07-24 23:11:59.357685] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:41.582 [2024-07-24 23:11:59.357693] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:41.582 [2024-07-24 23:11:59.357699] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.582 [2024-07-24 23:11:59.357703] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.582 [2024-07-24 23:11:59.357707] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb57ec0) 00:23:41.582 [2024-07-24 23:11:59.357713] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:41.582 [2024-07-24 23:11:59.357723] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbdb440, cid 4, qid 0 00:23:41.582 [2024-07-24 23:11:59.357894] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.582 [2024-07-24 23:11:59.357901] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.582 [2024-07-24 23:11:59.357905] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.582 [2024-07-24 23:11:59.357908] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbdb440) on tqpair=0xb57ec0 00:23:41.582 [2024-07-24 23:11:59.357972] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:41.582 [2024-07-24 23:11:59.357981] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:41.582 [2024-07-24 23:11:59.357988] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.582 [2024-07-24 23:11:59.357992] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb57ec0) 00:23:41.583 [2024-07-24 23:11:59.357998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.583 [2024-07-24 23:11:59.358009] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbdb440, cid 4, qid 0 00:23:41.583 [2024-07-24 23:11:59.358247] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:41.583 [2024-07-24 23:11:59.358254] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:41.583 [2024-07-24 23:11:59.358257] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:41.583 [2024-07-24 23:11:59.358261] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb57ec0): datao=0, datal=4096, cccid=4 00:23:41.583 [2024-07-24 23:11:59.358265] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbdb440) on tqpair(0xb57ec0): expected_datao=0, payload_size=4096 00:23:41.583 [2024-07-24 23:11:59.358269] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.583 [2024-07-24 23:11:59.358276] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:41.583 [2024-07-24 23:11:59.358280] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:41.583 [2024-07-24 23:11:59.358433] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.583 [2024-07-24 23:11:59.358439] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.583 [2024-07-24 23:11:59.358442] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.583 [2024-07-24 23:11:59.358446] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbdb440) on tqpair=0xb57ec0 00:23:41.583 [2024-07-24 23:11:59.358459] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:41.583 [2024-07-24 23:11:59.358467] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:41.583 [2024-07-24 23:11:59.358475] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:41.583 [2024-07-24 23:11:59.358482] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.583 [2024-07-24 23:11:59.358486] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0xb57ec0) 00:23:41.583 [2024-07-24 23:11:59.358496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.583 [2024-07-24 23:11:59.358507] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbdb440, cid 4, qid 0 00:23:41.583 [2024-07-24 23:11:59.358737] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:41.583 [2024-07-24 23:11:59.358743] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:41.583 [2024-07-24 23:11:59.358747] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:41.844 [2024-07-24 23:11:59.362755] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb57ec0): datao=0, datal=4096, cccid=4 00:23:41.845 [2024-07-24 23:11:59.362762] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbdb440) on tqpair(0xb57ec0): expected_datao=0, payload_size=4096 00:23:41.845 [2024-07-24 23:11:59.362766] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.845 [2024-07-24 23:11:59.362777] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:41.845 [2024-07-24 23:11:59.362781] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:41.845 [2024-07-24 23:11:59.362788] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.845 [2024-07-24 23:11:59.362793] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.845 [2024-07-24 23:11:59.362797] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.845 [2024-07-24 23:11:59.362801] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbdb440) on tqpair=0xb57ec0 00:23:41.845 [2024-07-24 23:11:59.362813] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:41.845 [2024-07-24 
23:11:59.362822] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:41.845 [2024-07-24 23:11:59.362829] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.845 [2024-07-24 23:11:59.362832] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb57ec0) 00:23:41.845 [2024-07-24 23:11:59.362839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.845 [2024-07-24 23:11:59.362850] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbdb440, cid 4, qid 0 00:23:41.845 [2024-07-24 23:11:59.363047] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:41.845 [2024-07-24 23:11:59.363055] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:41.845 [2024-07-24 23:11:59.363059] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:41.845 [2024-07-24 23:11:59.363064] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb57ec0): datao=0, datal=4096, cccid=4 00:23:41.845 [2024-07-24 23:11:59.363070] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbdb440) on tqpair(0xb57ec0): expected_datao=0, payload_size=4096 00:23:41.845 [2024-07-24 23:11:59.363075] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.845 [2024-07-24 23:11:59.363111] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:41.845 [2024-07-24 23:11:59.363116] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:41.845 [2024-07-24 23:11:59.363294] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.845 [2024-07-24 23:11:59.363300] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.845 [2024-07-24 23:11:59.363304] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.845 [2024-07-24 23:11:59.363307] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbdb440) on tqpair=0xb57ec0 00:23:41.845 [2024-07-24 23:11:59.363314] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:41.845 [2024-07-24 23:11:59.363322] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:41.845 [2024-07-24 23:11:59.363330] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:41.845 [2024-07-24 23:11:59.363340] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:41.845 [2024-07-24 23:11:59.363345] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:41.845 [2024-07-24 23:11:59.363350] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:41.845 [2024-07-24 23:11:59.363355] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:41.845 [2024-07-24 23:11:59.363360] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:41.845 [2024-07-24 23:11:59.363365] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:41.845 [2024-07-24 23:11:59.363378] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.845 [2024-07-24 23:11:59.363382] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=4 on tqpair(0xb57ec0) 00:23:41.845 [2024-07-24 23:11:59.363388] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.845 [2024-07-24 23:11:59.363395] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.845 [2024-07-24 23:11:59.363399] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.845 [2024-07-24 23:11:59.363402] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb57ec0) 00:23:41.845 [2024-07-24 23:11:59.363408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:41.845 [2024-07-24 23:11:59.363421] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbdb440, cid 4, qid 0 00:23:41.845 [2024-07-24 23:11:59.363426] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbdb5c0, cid 5, qid 0 00:23:41.845 [2024-07-24 23:11:59.363622] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.845 [2024-07-24 23:11:59.363628] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.845 [2024-07-24 23:11:59.363631] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.845 [2024-07-24 23:11:59.363635] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbdb440) on tqpair=0xb57ec0 00:23:41.845 [2024-07-24 23:11:59.363642] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.845 [2024-07-24 23:11:59.363648] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.845 [2024-07-24 23:11:59.363651] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.845 [2024-07-24 23:11:59.363655] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbdb5c0) on tqpair=0xb57ec0 00:23:41.845 [2024-07-24 23:11:59.363664] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.845 [2024-07-24 23:11:59.363667] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb57ec0) 00:23:41.845 [2024-07-24 23:11:59.363674] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.845 [2024-07-24 23:11:59.363683] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbdb5c0, cid 5, qid 0 00:23:41.845 [2024-07-24 23:11:59.363902] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.845 [2024-07-24 23:11:59.363909] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.845 [2024-07-24 23:11:59.363912] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.845 [2024-07-24 23:11:59.363916] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbdb5c0) on tqpair=0xb57ec0 00:23:41.845 [2024-07-24 23:11:59.363925] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.845 [2024-07-24 23:11:59.363929] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb57ec0) 00:23:41.845 [2024-07-24 23:11:59.363938] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.845 [2024-07-24 23:11:59.363948] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbdb5c0, cid 5, qid 0 00:23:41.845 [2024-07-24 23:11:59.364163] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.845 [2024-07-24 23:11:59.364170] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.845 [2024-07-24 23:11:59.364173] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.845 [2024-07-24 23:11:59.364177] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbdb5c0) on 
tqpair=0xb57ec0 00:23:41.845 [2024-07-24 23:11:59.364186] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.845 [2024-07-24 23:11:59.364190] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb57ec0) 00:23:41.845 [2024-07-24 23:11:59.364196] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.845 [2024-07-24 23:11:59.364206] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbdb5c0, cid 5, qid 0 00:23:41.845 [2024-07-24 23:11:59.364433] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.845 [2024-07-24 23:11:59.364440] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.845 [2024-07-24 23:11:59.364444] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.845 [2024-07-24 23:11:59.364447] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbdb5c0) on tqpair=0xb57ec0 00:23:41.845 [2024-07-24 23:11:59.364461] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.845 [2024-07-24 23:11:59.364465] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb57ec0) 00:23:41.845 [2024-07-24 23:11:59.364471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.845 [2024-07-24 23:11:59.364478] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.845 [2024-07-24 23:11:59.364482] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb57ec0) 00:23:41.845 [2024-07-24 23:11:59.364488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.845 [2024-07-24 
23:11:59.364495] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.845 [2024-07-24 23:11:59.364499] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xb57ec0) 00:23:41.845 [2024-07-24 23:11:59.364505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.845 [2024-07-24 23:11:59.364512] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.845 [2024-07-24 23:11:59.364516] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb57ec0) 00:23:41.845 [2024-07-24 23:11:59.364522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.845 [2024-07-24 23:11:59.364533] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbdb5c0, cid 5, qid 0 00:23:41.845 [2024-07-24 23:11:59.364538] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbdb440, cid 4, qid 0 00:23:41.845 [2024-07-24 23:11:59.364542] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbdb740, cid 6, qid 0 00:23:41.845 [2024-07-24 23:11:59.364547] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbdb8c0, cid 7, qid 0 00:23:41.845 [2024-07-24 23:11:59.364806] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:41.845 [2024-07-24 23:11:59.364813] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:41.845 [2024-07-24 23:11:59.364816] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:41.845 [2024-07-24 23:11:59.364822] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb57ec0): datao=0, datal=8192, cccid=5 00:23:41.845 [2024-07-24 23:11:59.364826] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xbdb5c0) on tqpair(0xb57ec0): expected_datao=0, payload_size=8192 00:23:41.846 [2024-07-24 23:11:59.364830] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.846 [2024-07-24 23:11:59.364870] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:41.846 [2024-07-24 23:11:59.364874] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:41.846 [2024-07-24 23:11:59.364880] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:41.846 [2024-07-24 23:11:59.364886] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:41.846 [2024-07-24 23:11:59.364889] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:41.846 [2024-07-24 23:11:59.364892] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb57ec0): datao=0, datal=512, cccid=4 00:23:41.846 [2024-07-24 23:11:59.364897] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbdb440) on tqpair(0xb57ec0): expected_datao=0, payload_size=512 00:23:41.846 [2024-07-24 23:11:59.364901] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.846 [2024-07-24 23:11:59.364907] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:41.846 [2024-07-24 23:11:59.364910] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:41.846 [2024-07-24 23:11:59.364916] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:41.846 [2024-07-24 23:11:59.364922] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:41.846 [2024-07-24 23:11:59.364925] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:41.846 [2024-07-24 23:11:59.364928] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb57ec0): datao=0, datal=512, cccid=6 00:23:41.846 [2024-07-24 23:11:59.364932] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbdb740) on tqpair(0xb57ec0): expected_datao=0, payload_size=512 
00:23:41.846 [2024-07-24 23:11:59.364937] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.846 [2024-07-24 23:11:59.364943] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:41.846 [2024-07-24 23:11:59.364946] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:41.846 [2024-07-24 23:11:59.364952] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:41.846 [2024-07-24 23:11:59.364957] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:41.846 [2024-07-24 23:11:59.364961] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:41.846 [2024-07-24 23:11:59.364964] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb57ec0): datao=0, datal=4096, cccid=7 00:23:41.846 [2024-07-24 23:11:59.364968] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbdb8c0) on tqpair(0xb57ec0): expected_datao=0, payload_size=4096 00:23:41.846 [2024-07-24 23:11:59.364972] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.846 [2024-07-24 23:11:59.365013] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:41.846 [2024-07-24 23:11:59.365017] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:41.846 [2024-07-24 23:11:59.365211] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.846 [2024-07-24 23:11:59.365218] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.846 [2024-07-24 23:11:59.365221] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.846 [2024-07-24 23:11:59.365225] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbdb5c0) on tqpair=0xb57ec0 00:23:41.846 [2024-07-24 23:11:59.365236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.846 [2024-07-24 23:11:59.365242] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.846 [2024-07-24 23:11:59.365246] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.846 [2024-07-24 23:11:59.365249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbdb440) on tqpair=0xb57ec0 00:23:41.846 [2024-07-24 23:11:59.365259] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.846 [2024-07-24 23:11:59.365266] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.846 [2024-07-24 23:11:59.365269] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.846 [2024-07-24 23:11:59.365273] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbdb740) on tqpair=0xb57ec0 00:23:41.846 [2024-07-24 23:11:59.365280] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.846 [2024-07-24 23:11:59.365286] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.846 [2024-07-24 23:11:59.365289] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.846 [2024-07-24 23:11:59.365293] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbdb8c0) on tqpair=0xb57ec0 00:23:41.846 ===================================================== 00:23:41.846 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:41.846 ===================================================== 00:23:41.846 Controller Capabilities/Features 00:23:41.846 ================================ 00:23:41.846 Vendor ID: 8086 00:23:41.846 Subsystem Vendor ID: 8086 00:23:41.846 Serial Number: SPDK00000000000001 00:23:41.846 Model Number: SPDK bdev Controller 00:23:41.846 Firmware Version: 24.09 00:23:41.846 Recommended Arb Burst: 6 00:23:41.846 IEEE OUI Identifier: e4 d2 5c 00:23:41.846 Multi-path I/O 00:23:41.846 May have multiple subsystem ports: Yes 00:23:41.846 May have multiple controllers: Yes 00:23:41.846 Associated with SR-IOV VF: No 00:23:41.846 Max Data Transfer Size: 131072 00:23:41.846 Max Number of Namespaces: 32 00:23:41.846 Max Number of I/O 
Queues: 127 00:23:41.846 NVMe Specification Version (VS): 1.3 00:23:41.846 NVMe Specification Version (Identify): 1.3 00:23:41.846 Maximum Queue Entries: 128 00:23:41.846 Contiguous Queues Required: Yes 00:23:41.846 Arbitration Mechanisms Supported 00:23:41.846 Weighted Round Robin: Not Supported 00:23:41.846 Vendor Specific: Not Supported 00:23:41.846 Reset Timeout: 15000 ms 00:23:41.846 Doorbell Stride: 4 bytes 00:23:41.846 NVM Subsystem Reset: Not Supported 00:23:41.846 Command Sets Supported 00:23:41.846 NVM Command Set: Supported 00:23:41.846 Boot Partition: Not Supported 00:23:41.846 Memory Page Size Minimum: 4096 bytes 00:23:41.846 Memory Page Size Maximum: 4096 bytes 00:23:41.846 Persistent Memory Region: Not Supported 00:23:41.846 Optional Asynchronous Events Supported 00:23:41.846 Namespace Attribute Notices: Supported 00:23:41.846 Firmware Activation Notices: Not Supported 00:23:41.846 ANA Change Notices: Not Supported 00:23:41.846 PLE Aggregate Log Change Notices: Not Supported 00:23:41.846 LBA Status Info Alert Notices: Not Supported 00:23:41.846 EGE Aggregate Log Change Notices: Not Supported 00:23:41.846 Normal NVM Subsystem Shutdown event: Not Supported 00:23:41.846 Zone Descriptor Change Notices: Not Supported 00:23:41.846 Discovery Log Change Notices: Not Supported 00:23:41.846 Controller Attributes 00:23:41.846 128-bit Host Identifier: Supported 00:23:41.846 Non-Operational Permissive Mode: Not Supported 00:23:41.846 NVM Sets: Not Supported 00:23:41.846 Read Recovery Levels: Not Supported 00:23:41.846 Endurance Groups: Not Supported 00:23:41.846 Predictable Latency Mode: Not Supported 00:23:41.846 Traffic Based Keep ALive: Not Supported 00:23:41.846 Namespace Granularity: Not Supported 00:23:41.846 SQ Associations: Not Supported 00:23:41.846 UUID List: Not Supported 00:23:41.846 Multi-Domain Subsystem: Not Supported 00:23:41.846 Fixed Capacity Management: Not Supported 00:23:41.846 Variable Capacity Management: Not Supported 00:23:41.846 Delete 
Endurance Group: Not Supported 00:23:41.846 Delete NVM Set: Not Supported 00:23:41.846 Extended LBA Formats Supported: Not Supported 00:23:41.846 Flexible Data Placement Supported: Not Supported 00:23:41.846 00:23:41.846 Controller Memory Buffer Support 00:23:41.846 ================================ 00:23:41.846 Supported: No 00:23:41.846 00:23:41.846 Persistent Memory Region Support 00:23:41.846 ================================ 00:23:41.846 Supported: No 00:23:41.846 00:23:41.846 Admin Command Set Attributes 00:23:41.846 ============================ 00:23:41.846 Security Send/Receive: Not Supported 00:23:41.846 Format NVM: Not Supported 00:23:41.846 Firmware Activate/Download: Not Supported 00:23:41.846 Namespace Management: Not Supported 00:23:41.846 Device Self-Test: Not Supported 00:23:41.846 Directives: Not Supported 00:23:41.846 NVMe-MI: Not Supported 00:23:41.846 Virtualization Management: Not Supported 00:23:41.846 Doorbell Buffer Config: Not Supported 00:23:41.846 Get LBA Status Capability: Not Supported 00:23:41.846 Command & Feature Lockdown Capability: Not Supported 00:23:41.846 Abort Command Limit: 4 00:23:41.846 Async Event Request Limit: 4 00:23:41.846 Number of Firmware Slots: N/A 00:23:41.846 Firmware Slot 1 Read-Only: N/A 00:23:41.846 Firmware Activation Without Reset: N/A 00:23:41.846 Multiple Update Detection Support: N/A 00:23:41.846 Firmware Update Granularity: No Information Provided 00:23:41.846 Per-Namespace SMART Log: No 00:23:41.846 Asymmetric Namespace Access Log Page: Not Supported 00:23:41.846 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:41.846 Command Effects Log Page: Supported 00:23:41.846 Get Log Page Extended Data: Supported 00:23:41.846 Telemetry Log Pages: Not Supported 00:23:41.846 Persistent Event Log Pages: Not Supported 00:23:41.846 Supported Log Pages Log Page: May Support 00:23:41.846 Commands Supported & Effects Log Page: Not Supported 00:23:41.846 Feature Identifiers & Effects Log Page:May Support 00:23:41.846 NVMe-MI 
Commands & Effects Log Page: May Support 00:23:41.846 Data Area 4 for Telemetry Log: Not Supported 00:23:41.846 Error Log Page Entries Supported: 128 00:23:41.846 Keep Alive: Supported 00:23:41.846 Keep Alive Granularity: 10000 ms 00:23:41.846 00:23:41.846 NVM Command Set Attributes 00:23:41.846 ========================== 00:23:41.846 Submission Queue Entry Size 00:23:41.846 Max: 64 00:23:41.846 Min: 64 00:23:41.846 Completion Queue Entry Size 00:23:41.846 Max: 16 00:23:41.846 Min: 16 00:23:41.846 Number of Namespaces: 32 00:23:41.847 Compare Command: Supported 00:23:41.847 Write Uncorrectable Command: Not Supported 00:23:41.847 Dataset Management Command: Supported 00:23:41.847 Write Zeroes Command: Supported 00:23:41.847 Set Features Save Field: Not Supported 00:23:41.847 Reservations: Supported 00:23:41.847 Timestamp: Not Supported 00:23:41.847 Copy: Supported 00:23:41.847 Volatile Write Cache: Present 00:23:41.847 Atomic Write Unit (Normal): 1 00:23:41.847 Atomic Write Unit (PFail): 1 00:23:41.847 Atomic Compare & Write Unit: 1 00:23:41.847 Fused Compare & Write: Supported 00:23:41.847 Scatter-Gather List 00:23:41.847 SGL Command Set: Supported 00:23:41.847 SGL Keyed: Supported 00:23:41.847 SGL Bit Bucket Descriptor: Not Supported 00:23:41.847 SGL Metadata Pointer: Not Supported 00:23:41.847 Oversized SGL: Not Supported 00:23:41.847 SGL Metadata Address: Not Supported 00:23:41.847 SGL Offset: Supported 00:23:41.847 Transport SGL Data Block: Not Supported 00:23:41.847 Replay Protected Memory Block: Not Supported 00:23:41.847 00:23:41.847 Firmware Slot Information 00:23:41.847 ========================= 00:23:41.847 Active slot: 1 00:23:41.847 Slot 1 Firmware Revision: 24.09 00:23:41.847 00:23:41.847 00:23:41.847 Commands Supported and Effects 00:23:41.847 ============================== 00:23:41.847 Admin Commands 00:23:41.847 -------------- 00:23:41.847 Get Log Page (02h): Supported 00:23:41.847 Identify (06h): Supported 00:23:41.847 Abort (08h): Supported 
00:23:41.847 Set Features (09h): Supported 00:23:41.847 Get Features (0Ah): Supported 00:23:41.847 Asynchronous Event Request (0Ch): Supported 00:23:41.847 Keep Alive (18h): Supported 00:23:41.847 I/O Commands 00:23:41.847 ------------ 00:23:41.847 Flush (00h): Supported LBA-Change 00:23:41.847 Write (01h): Supported LBA-Change 00:23:41.847 Read (02h): Supported 00:23:41.847 Compare (05h): Supported 00:23:41.847 Write Zeroes (08h): Supported LBA-Change 00:23:41.847 Dataset Management (09h): Supported LBA-Change 00:23:41.847 Copy (19h): Supported LBA-Change 00:23:41.847 00:23:41.847 Error Log 00:23:41.847 ========= 00:23:41.847 00:23:41.847 Arbitration 00:23:41.847 =========== 00:23:41.847 Arbitration Burst: 1 00:23:41.847 00:23:41.847 Power Management 00:23:41.847 ================ 00:23:41.847 Number of Power States: 1 00:23:41.847 Current Power State: Power State #0 00:23:41.847 Power State #0: 00:23:41.847 Max Power: 0.00 W 00:23:41.847 Non-Operational State: Operational 00:23:41.847 Entry Latency: Not Reported 00:23:41.847 Exit Latency: Not Reported 00:23:41.847 Relative Read Throughput: 0 00:23:41.847 Relative Read Latency: 0 00:23:41.847 Relative Write Throughput: 0 00:23:41.847 Relative Write Latency: 0 00:23:41.847 Idle Power: Not Reported 00:23:41.847 Active Power: Not Reported 00:23:41.847 Non-Operational Permissive Mode: Not Supported 00:23:41.847 00:23:41.847 Health Information 00:23:41.847 ================== 00:23:41.847 Critical Warnings: 00:23:41.847 Available Spare Space: OK 00:23:41.847 Temperature: OK 00:23:41.847 Device Reliability: OK 00:23:41.847 Read Only: No 00:23:41.847 Volatile Memory Backup: OK 00:23:41.847 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:41.847 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:41.847 Available Spare: 0% 00:23:41.847 Available Spare Threshold: 0% 00:23:41.847 Life Percentage Used:[2024-07-24 23:11:59.365390] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.847 [2024-07-24 
23:11:59.365396] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb57ec0) 00:23:41.847 [2024-07-24 23:11:59.365402] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.847 [2024-07-24 23:11:59.365414] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbdb8c0, cid 7, qid 0 00:23:41.847 [2024-07-24 23:11:59.365593] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.847 [2024-07-24 23:11:59.365599] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.847 [2024-07-24 23:11:59.365603] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.847 [2024-07-24 23:11:59.365607] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbdb8c0) on tqpair=0xb57ec0 00:23:41.847 [2024-07-24 23:11:59.365636] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:41.847 [2024-07-24 23:11:59.365645] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbdae40) on tqpair=0xb57ec0 00:23:41.847 [2024-07-24 23:11:59.365651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.847 [2024-07-24 23:11:59.365656] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbdafc0) on tqpair=0xb57ec0 00:23:41.847 [2024-07-24 23:11:59.365661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.847 [2024-07-24 23:11:59.365665] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbdb140) on tqpair=0xb57ec0 00:23:41.847 [2024-07-24 23:11:59.365670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.847 [2024-07-24 
23:11:59.365675] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbdb2c0) on tqpair=0xb57ec0 00:23:41.847 [2024-07-24 23:11:59.365679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.847 [2024-07-24 23:11:59.365687] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.847 [2024-07-24 23:11:59.365691] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.847 [2024-07-24 23:11:59.365694] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb57ec0) 00:23:41.847 [2024-07-24 23:11:59.365701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.847 [2024-07-24 23:11:59.365714] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbdb2c0, cid 3, qid 0 00:23:41.847 [2024-07-24 23:11:59.365925] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.847 [2024-07-24 23:11:59.365933] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.847 [2024-07-24 23:11:59.365937] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.847 [2024-07-24 23:11:59.365940] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbdb2c0) on tqpair=0xb57ec0 00:23:41.847 [2024-07-24 23:11:59.365947] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.847 [2024-07-24 23:11:59.365951] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.847 [2024-07-24 23:11:59.365954] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb57ec0) 00:23:41.847 [2024-07-24 23:11:59.365963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.847 [2024-07-24 23:11:59.365976] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbdb2c0, cid 3, qid 0 00:23:41.847 [2024-07-24 23:11:59.366193] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.847 [2024-07-24 23:11:59.366200] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.847 [2024-07-24 23:11:59.366203] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.847 [2024-07-24 23:11:59.366207] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbdb2c0) on tqpair=0xb57ec0 00:23:41.847 [2024-07-24 23:11:59.366211] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:41.847 [2024-07-24 23:11:59.366216] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:41.847 [2024-07-24 23:11:59.366225] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.847 [2024-07-24 23:11:59.366229] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.847 [2024-07-24 23:11:59.366233] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb57ec0) 00:23:41.847 [2024-07-24 23:11:59.366239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.847 [2024-07-24 23:11:59.366249] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbdb2c0, cid 3, qid 0 00:23:41.847 [2024-07-24 23:11:59.366454] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.847 [2024-07-24 23:11:59.366460] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.847 [2024-07-24 23:11:59.366464] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.847 [2024-07-24 23:11:59.366468] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbdb2c0) on tqpair=0xb57ec0 00:23:41.847 [2024-07-24 23:11:59.366478] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.847 [2024-07-24 23:11:59.366483] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.847 [2024-07-24 23:11:59.366487] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb57ec0) 00:23:41.847 [2024-07-24 23:11:59.366495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.847 [2024-07-24 23:11:59.366506] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbdb2c0, cid 3, qid 0 00:23:41.847 [2024-07-24 23:11:59.366727] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.847 [2024-07-24 23:11:59.366734] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.847 [2024-07-24 23:11:59.366737] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.847 [2024-07-24 23:11:59.366741] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbdb2c0) on tqpair=0xb57ec0 00:23:41.847 [2024-07-24 23:11:59.370755] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:41.847 [2024-07-24 23:11:59.370762] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:41.847 [2024-07-24 23:11:59.370766] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb57ec0) 00:23:41.847 [2024-07-24 23:11:59.370773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.847 [2024-07-24 23:11:59.370785] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbdb2c0, cid 3, qid 0 00:23:41.847 [2024-07-24 23:11:59.370993] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:41.847 [2024-07-24 23:11:59.370999] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:41.848 [2024-07-24 23:11:59.371002] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:41.848 [2024-07-24 23:11:59.371006] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbdb2c0) on tqpair=0xb57ec0 00:23:41.848 [2024-07-24 23:11:59.371013] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:23:41.848 0% 00:23:41.848 Data Units Read: 0 00:23:41.848 Data Units Written: 0 00:23:41.848 Host Read Commands: 0 00:23:41.848 Host Write Commands: 0 00:23:41.848 Controller Busy Time: 0 minutes 00:23:41.848 Power Cycles: 0 00:23:41.848 Power On Hours: 0 hours 00:23:41.848 Unsafe Shutdowns: 0 00:23:41.848 Unrecoverable Media Errors: 0 00:23:41.848 Lifetime Error Log Entries: 0 00:23:41.848 Warning Temperature Time: 0 minutes 00:23:41.848 Critical Temperature Time: 0 minutes 00:23:41.848 00:23:41.848 Number of Queues 00:23:41.848 ================ 00:23:41.848 Number of I/O Submission Queues: 127 00:23:41.848 Number of I/O Completion Queues: 127 00:23:41.848 00:23:41.848 Active Namespaces 00:23:41.848 ================= 00:23:41.848 Namespace ID:1 00:23:41.848 Error Recovery Timeout: Unlimited 00:23:41.848 Command Set Identifier: NVM (00h) 00:23:41.848 Deallocate: Supported 00:23:41.848 Deallocated/Unwritten Error: Not Supported 00:23:41.848 Deallocated Read Value: Unknown 00:23:41.848 Deallocate in Write Zeroes: Not Supported 00:23:41.848 Deallocated Guard Field: 0xFFFF 00:23:41.848 Flush: Supported 00:23:41.848 Reservation: Supported 00:23:41.848 Namespace Sharing Capabilities: Multiple Controllers 00:23:41.848 Size (in LBAs): 131072 (0GiB) 00:23:41.848 Capacity (in LBAs): 131072 (0GiB) 00:23:41.848 Utilization (in LBAs): 131072 (0GiB) 00:23:41.848 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:41.848 EUI64: ABCDEF0123456789 00:23:41.848 UUID: 0828b14c-79f1-45e0-943f-0c1e9190a950 00:23:41.848 Thin Provisioning: Not Supported 00:23:41.848 Per-NS Atomic Units: Yes 00:23:41.848 Atomic Boundary Size 
(Normal): 0 00:23:41.848 Atomic Boundary Size (PFail): 0 00:23:41.848 Atomic Boundary Offset: 0 00:23:41.848 Maximum Single Source Range Length: 65535 00:23:41.848 Maximum Copy Length: 65535 00:23:41.848 Maximum Source Range Count: 1 00:23:41.848 NGUID/EUI64 Never Reused: No 00:23:41.848 Namespace Write Protected: No 00:23:41.848 Number of LBA Formats: 1 00:23:41.848 Current LBA Format: LBA Format #00 00:23:41.848 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:41.848 00:23:41.848 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:41.848 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:41.848 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.848 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:41.848 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.848 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:41.848 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:41.848 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:41.848 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:23:41.848 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:41.848 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:23:41.848 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:41.848 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:41.848 rmmod nvme_tcp 00:23:41.848 rmmod nvme_fabrics 00:23:41.848 rmmod nvme_keyring 00:23:41.848 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:23:41.848 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:23:41.848 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:23:41.848 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 946890 ']' 00:23:41.848 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 946890 00:23:41.848 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 946890 ']' 00:23:41.848 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 946890 00:23:41.848 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:23:41.848 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:41.848 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 946890 00:23:41.848 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:41.848 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:41.848 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 946890' 00:23:41.848 killing process with pid 946890 00:23:41.848 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 946890 00:23:41.848 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 946890 00:23:42.109 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:42.109 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:42.109 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:42.109 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s 
]] 00:23:42.109 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:42.109 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.109 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:42.109 23:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.022 23:12:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:44.022 00:23:44.022 real 0m12.219s 00:23:44.022 user 0m7.900s 00:23:44.022 sys 0m6.636s 00:23:44.022 23:12:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:44.022 23:12:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.022 ************************************ 00:23:44.022 END TEST nvmf_identify 00:23:44.022 ************************************ 00:23:44.022 23:12:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:44.022 23:12:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:44.022 23:12:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:44.022 23:12:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.284 ************************************ 00:23:44.284 START TEST nvmf_perf 00:23:44.284 ************************************ 00:23:44.284 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:44.284 * Looking for test storage... 
00:23:44.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:44.284 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:44.284 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:44.284 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:44.284 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:44.284 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:44.284 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:44.284 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:44.284 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:44.284 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:44.284 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:44.284 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:44.284 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:44.284 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:44.284 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:44.284 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:44.284 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:44.284 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:44.284 23:12:01 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:44.284 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:44.284 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:44.284 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:44.284 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:44.284 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.285 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.285 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.285 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:44.285 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.285 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:23:44.285 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:44.285 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:44.285 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:44.285 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:44.285 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:44.285 23:12:01 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:44.285 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:44.285 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:44.285 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:44.285 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:44.285 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:44.285 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:44.285 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:44.285 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:44.285 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:44.285 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:44.285 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:44.285 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.285 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.285 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.285 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:44.285 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:44.285 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:23:44.285 23:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:52.425 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:52.425 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:23:52.425 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:52.425 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:52.425 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:52.425 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:52.425 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:52.425 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:23:52.425 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:52.425 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:23:52.425 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:23:52.425 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:23:52.425 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:23:52.425 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:23:52.425 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:23:52.425 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:52.425 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:52.425 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:52.425 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:52.426 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.426 23:12:09 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:52.426 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:31:00.0: cvl_0_0' 00:23:52.426 Found net devices under 0000:31:00.0: cvl_0_0 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:52.426 Found net devices under 0000:31:00.1: cvl_0_1 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:52.426 23:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:52.426 23:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:52.426 23:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:52.426 23:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:52.426 
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:52.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:23:52.426 00:23:52.426 --- 10.0.0.2 ping statistics --- 00:23:52.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.426 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:23:52.426 23:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:52.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:52.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:23:52.426 00:23:52.426 --- 10.0.0.1 ping statistics --- 00:23:52.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.426 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:23:52.426 23:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:52.426 23:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:23:52.426 23:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:52.426 23:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:52.426 23:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:52.426 23:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:52.426 23:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:52.426 23:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:52.426 23:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:52.426 23:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:52.426 23:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:52.426 23:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 
00:23:52.426 23:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:52.426 23:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=952294 00:23:52.426 23:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 952294 00:23:52.426 23:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:52.426 23:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 952294 ']' 00:23:52.426 23:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.426 23:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:52.426 23:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.426 23:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:52.426 23:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:52.687 [2024-07-24 23:12:10.224454] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:23:52.687 [2024-07-24 23:12:10.224521] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.687 EAL: No free 2048 kB hugepages reported on node 1 00:23:52.687 [2024-07-24 23:12:10.307431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:52.687 [2024-07-24 23:12:10.383071] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:52.687 [2024-07-24 23:12:10.383110] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.687 [2024-07-24 23:12:10.383118] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.687 [2024-07-24 23:12:10.383124] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.687 [2024-07-24 23:12:10.383130] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:52.687 [2024-07-24 23:12:10.383270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.687 [2024-07-24 23:12:10.383388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:52.687 [2024-07-24 23:12:10.383546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.687 [2024-07-24 23:12:10.383547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:53.259 23:12:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:53.259 23:12:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:23:53.259 23:12:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:53.259 23:12:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:53.259 23:12:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:53.519 23:12:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.519 23:12:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:53.519 23:12:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:53.780 23:12:11 nvmf_tcp.nvmf_host.nvmf_perf 
-- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:53.780 23:12:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:54.068 23:12:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:23:54.068 23:12:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:54.329 23:12:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:54.329 23:12:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:23:54.329 23:12:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:54.329 23:12:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:54.329 23:12:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:54.329 [2024-07-24 23:12:12.012961] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:54.329 23:12:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:54.589 23:12:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:54.589 23:12:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:54.850 23:12:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:54.850 23:12:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:54.850 23:12:12 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:55.111 [2024-07-24 23:12:12.683394] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:55.111 23:12:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:55.111 23:12:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:23:55.111 23:12:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:23:55.111 23:12:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:55.111 23:12:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:23:56.495 Initializing NVMe Controllers 00:23:56.495 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:23:56.495 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:23:56.495 Initialization complete. Launching workers. 
00:23:56.495 ======================================================== 00:23:56.495 Latency(us) 00:23:56.495 Device Information : IOPS MiB/s Average min max 00:23:56.495 PCIE (0000:65:00.0) NSID 1 from core 0: 79656.72 311.16 401.23 13.28 7419.46 00:23:56.495 ======================================================== 00:23:56.495 Total : 79656.72 311.16 401.23 13.28 7419.46 00:23:56.495 00:23:56.495 23:12:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:56.495 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.879 Initializing NVMe Controllers 00:23:57.880 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:57.880 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:57.880 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:57.880 Initialization complete. Launching workers. 
00:23:57.880 ======================================================== 00:23:57.880 Latency(us) 00:23:57.880 Device Information : IOPS MiB/s Average min max 00:23:57.880 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 131.00 0.51 7919.98 203.21 46115.58 00:23:57.880 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 16485.38 6923.07 48887.16 00:23:57.880 ======================================================== 00:23:57.880 Total : 192.00 0.75 10641.28 203.21 48887.16 00:23:57.880 00:23:57.880 23:12:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:57.880 EAL: No free 2048 kB hugepages reported on node 1 00:23:59.264 Initializing NVMe Controllers 00:23:59.264 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:59.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:59.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:59.264 Initialization complete. Launching workers. 
00:23:59.264 ======================================================== 00:23:59.264 Latency(us) 00:23:59.264 Device Information : IOPS MiB/s Average min max 00:23:59.264 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10401.00 40.63 3082.01 516.77 9297.92 00:23:59.264 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3777.00 14.75 8510.82 5512.84 16372.89 00:23:59.264 ======================================================== 00:23:59.264 Total : 14178.00 55.38 4528.24 516.77 16372.89 00:23:59.264 00:23:59.264 23:12:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:59.264 23:12:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:59.264 23:12:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:59.264 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.807 Initializing NVMe Controllers 00:24:01.807 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:01.807 Controller IO queue size 128, less than required. 00:24:01.807 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:01.807 Controller IO queue size 128, less than required. 00:24:01.807 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:01.807 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:01.807 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:01.807 Initialization complete. Launching workers. 
00:24:01.807 ========================================================
00:24:01.808 Latency(us)
00:24:01.808 Device Information : IOPS MiB/s Average min max
00:24:01.808 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 937.27 234.32 140317.68 72936.87 182753.35
00:24:01.808 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 589.54 147.39 229146.73 74929.71 362474.15
00:24:01.808 ========================================================
00:24:01.808 Total : 1526.81 381.70 174616.85 72936.87 362474.15
00:24:01.808
00:24:01.808 23:12:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:24:01.808 EAL: No free 2048 kB hugepages reported on node 1
00:24:01.808 No valid NVMe controllers or AIO or URING devices found
00:24:01.808 Initializing NVMe Controllers
00:24:01.808 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:01.808 Controller IO queue size 128, less than required.
00:24:01.808 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:01.808 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:24:01.808 Controller IO queue size 128, less than required.
00:24:01.808 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:01.808 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:24:01.808 WARNING: Some requested NVMe devices were skipped
00:24:01.808 23:12:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:24:01.808 EAL: No free 2048 kB hugepages reported on node 1
00:24:04.389 Initializing NVMe Controllers
00:24:04.389 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:04.389 Controller IO queue size 128, less than required.
00:24:04.389 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:04.389 Controller IO queue size 128, less than required.
00:24:04.389 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:04.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:04.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:04.389 Initialization complete. Launching workers.
00:24:04.389
00:24:04.389 ====================
00:24:04.389 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:24:04.389 TCP transport:
00:24:04.389 polls: 34703
00:24:04.389 idle_polls: 11750
00:24:04.389 sock_completions: 22953
00:24:04.389 nvme_completions: 4035
00:24:04.389 submitted_requests: 6040
00:24:04.389 queued_requests: 1
00:24:04.389
00:24:04.389 ====================
00:24:04.389 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:24:04.389 TCP transport:
00:24:04.389 polls: 39986
00:24:04.389 idle_polls: 14290
00:24:04.389 sock_completions: 25696
00:24:04.389 nvme_completions: 4257
00:24:04.389 submitted_requests: 6322
00:24:04.389 queued_requests: 1
00:24:04.389 ========================================================
00:24:04.389 Latency(us)
00:24:04.389 Device Information : IOPS MiB/s Average min max
00:24:04.389 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1008.49 252.12 129956.01 66468.81 214049.59
00:24:04.390 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1063.99 266.00 124513.01 65173.07 162147.10
00:24:04.390 ========================================================
00:24:04.390 Total : 2072.48 518.12 127161.63 65173.07 214049.59
00:24:04.390
00:24:04.390 23:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:24:04.390 23:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:04.650 23:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:24:04.650 23:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:24:04.650 23:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:24:04.650 23:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:04.651 23:12:22
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync
00:24:04.651 23:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:04.651 23:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e
00:24:04.651 23:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:04.651 23:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:04.651 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:24:04.651 23:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:04.651 23:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e
00:24:04.651 23:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0
00:24:04.651 23:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 952294 ']'
00:24:04.651 23:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 952294
00:24:04.651 23:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 952294 ']'
00:24:04.651 23:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 952294
00:24:04.651 23:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname
00:24:04.651 23:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:04.651 23:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 952294
00:24:04.651 23:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:24:04.651 23:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:24:04.651 23:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 952294'
killing process with pid 952294
00:24:04.651 23:12:22 nvmf_tcp.nvmf_host.nvmf_perf --
common/autotest_common.sh@969 -- # kill 952294
00:24:04.651 23:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 952294
00:24:07.195 23:12:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:24:07.195 23:12:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:24:07.195 23:12:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:24:07.195 23:12:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:07.195 23:12:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns
00:24:07.195 23:12:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:07.195 23:12:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:07.195 23:12:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:09.109 23:12:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:24:09.109
00:24:09.109 real 0m24.621s
00:24:09.109 user 0m57.985s
00:24:09.109 sys 0m8.305s
00:24:09.109 23:12:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:24:09.109 23:12:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:24:09.109 ************************************
00:24:09.109 END TEST nvmf_perf
************************************
00:24:09.109 23:12:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:24:09.109 23:12:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:24:09.109 23:12:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:24:09.109 23:12:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:24:09.109
************************************
00:24:09.109 START TEST nvmf_fio_host
************************************
00:24:09.109 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:24:09.109 * Looking for test storage...
00:24:09.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:24:09.109 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:09.109 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:09.109 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:09.109 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:09.109 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:09.109 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # 
trap nvmftestfini SIGINT SIGTERM EXIT 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:09.110 23:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.256 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:17.256 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:17.256 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:17.256 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:17.256 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:17.256 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:17.256 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:17.256 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:17.256 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- 
# local -ga net_devs 00:24:17.256 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:24:17.256 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:17.256 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:24:17.256 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:17.256 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:24:17.256 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:17.256 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:17.256 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:17.256 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:17.256 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:17.256 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:17.256 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:17.256 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:17.256 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:17.256 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:17.256 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:17.256 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:17.256 23:12:34 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:17.256 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:17.256 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:17.256 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:17.256 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:17.256 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:17.257 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:17.257 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:17.257 Found net devices under 0000:31:00.0: cvl_0_0 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:17.257 Found net devices under 0000:31:00.1: cvl_0_1 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:17.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:17.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.762 ms
00:24:17.257
00:24:17.257 --- 10.0.0.2 ping statistics ---
00:24:17.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:17.257 rtt min/avg/max/mdev = 0.762/0.762/0.762/0.000 ms
00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:17.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:17.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.412 ms
00:24:17.257
00:24:17.257 --- 10.0.0.1 ping statistics ---
00:24:17.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:17.257 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms
00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0
00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]]
00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt
00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable
00:24:17.257
23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=959881 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 959881 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 959881 ']' 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:17.257 23:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.257 [2024-07-24 23:12:34.856971] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:24:17.257 [2024-07-24 23:12:34.857057] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:17.257 EAL: No free 2048 kB hugepages reported on node 1 00:24:17.257 [2024-07-24 23:12:34.936295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:17.257 [2024-07-24 23:12:35.011367] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:17.257 [2024-07-24 23:12:35.011405] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:17.257 [2024-07-24 23:12:35.011412] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:17.257 [2024-07-24 23:12:35.011419] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:17.257 [2024-07-24 23:12:35.011425] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:17.257 [2024-07-24 23:12:35.011567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:17.257 [2024-07-24 23:12:35.011680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:17.257 [2024-07-24 23:12:35.011822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.257 [2024-07-24 23:12:35.011822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:18.199 23:12:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:18.199 23:12:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:24:18.199 23:12:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:18.199 [2024-07-24 23:12:35.777962] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:18.199 23:12:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:18.199 23:12:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:18.199 23:12:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.199 23:12:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:18.459 Malloc1 00:24:18.459 23:12:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:18.459 23:12:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:18.720 23:12:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:18.980 [2024-07-24 23:12:36.507367] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:18.980 23:12:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:18.980 23:12:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:18.980 23:12:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:18.980 23:12:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:18.980 23:12:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:18.980 23:12:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:18.980 23:12:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:18.980 23:12:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:18.980 23:12:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:18.980 23:12:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:18.980 23:12:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:18.980 23:12:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:18.980 23:12:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:18.980 23:12:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:18.980 23:12:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:18.980 23:12:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:18.980 23:12:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:18.980 23:12:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:18.980 23:12:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:18.980 23:12:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:18.980 23:12:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:18.981 23:12:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:18.981 23:12:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:18.981 23:12:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:19.556 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:19.556 fio-3.35 
00:24:19.556 Starting 1 thread 00:24:19.556 EAL: No free 2048 kB hugepages reported on node 1 00:24:22.101 00:24:22.101 test: (groupid=0, jobs=1): err= 0: pid=960412: Wed Jul 24 23:12:39 2024 00:24:22.101 read: IOPS=13.6k, BW=53.0MiB/s (55.6MB/s)(106MiB/2004msec) 00:24:22.101 slat (usec): min=2, max=277, avg= 2.20, stdev= 2.38 00:24:22.101 clat (usec): min=3511, max=8794, avg=5226.47, stdev=733.00 00:24:22.101 lat (usec): min=3513, max=8796, avg=5228.67, stdev=733.05 00:24:22.101 clat percentiles (usec): 00:24:22.101 | 1.00th=[ 4113], 5.00th=[ 4424], 10.00th=[ 4555], 20.00th=[ 4752], 00:24:22.102 | 30.00th=[ 4883], 40.00th=[ 4948], 50.00th=[ 5080], 60.00th=[ 5211], 00:24:22.102 | 70.00th=[ 5342], 80.00th=[ 5473], 90.00th=[ 5997], 95.00th=[ 7046], 00:24:22.102 | 99.00th=[ 7832], 99.50th=[ 7963], 99.90th=[ 8356], 99.95th=[ 8586], 00:24:22.102 | 99.99th=[ 8717] 00:24:22.102 bw ( KiB/s): min=46992, max=56736, per=99.92%, avg=54226.00, stdev=4823.20, samples=4 00:24:22.102 iops : min=11748, max=14184, avg=13556.50, stdev=1205.80, samples=4 00:24:22.102 write: IOPS=13.6k, BW=53.0MiB/s (55.5MB/s)(106MiB/2004msec); 0 zone resets 00:24:22.102 slat (usec): min=2, max=286, avg= 2.31, stdev= 1.89 00:24:22.102 clat (usec): min=2573, max=7261, avg=4149.25, stdev=610.97 00:24:22.102 lat (usec): min=2575, max=7263, avg=4151.56, stdev=611.05 00:24:22.102 clat percentiles (usec): 00:24:22.102 | 1.00th=[ 3130], 5.00th=[ 3458], 10.00th=[ 3621], 20.00th=[ 3752], 00:24:22.102 | 30.00th=[ 3884], 40.00th=[ 3949], 50.00th=[ 4047], 60.00th=[ 4113], 00:24:22.102 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4621], 95.00th=[ 5735], 00:24:22.102 | 99.00th=[ 6325], 99.50th=[ 6456], 99.90th=[ 6783], 99.95th=[ 6915], 00:24:22.102 | 99.99th=[ 7177] 00:24:22.102 bw ( KiB/s): min=47528, max=56648, per=100.00%, avg=54228.00, stdev=4469.55, samples=4 00:24:22.102 iops : min=11882, max=14162, avg=13557.00, stdev=1117.39, samples=4 00:24:22.102 lat (msec) : 4=22.52%, 10=77.48% 00:24:22.102 cpu : 
usr=66.30%, sys=28.81%, ctx=17, majf=0, minf=5 00:24:22.102 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:22.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:22.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:22.102 issued rwts: total=27188,27166,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:22.102 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:22.102 00:24:22.102 Run status group 0 (all jobs): 00:24:22.102 READ: bw=53.0MiB/s (55.6MB/s), 53.0MiB/s-53.0MiB/s (55.6MB/s-55.6MB/s), io=106MiB (111MB), run=2004-2004msec 00:24:22.102 WRITE: bw=53.0MiB/s (55.5MB/s), 53.0MiB/s-53.0MiB/s (55.5MB/s-55.5MB/s), io=106MiB (111MB), run=2004-2004msec 00:24:22.102 23:12:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:22.102 23:12:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:22.102 23:12:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:22.102 23:12:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:22.102 23:12:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:22.102 23:12:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:22.102 23:12:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:22.102 23:12:39 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:22.102 23:12:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:22.102 23:12:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:22.102 23:12:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:22.102 23:12:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:22.102 23:12:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:22.102 23:12:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:22.102 23:12:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:22.102 23:12:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:22.102 23:12:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:22.102 23:12:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:22.102 23:12:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:22.102 23:12:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:22.102 23:12:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:22.102 23:12:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 
00:24:22.102 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:22.102 fio-3.35 00:24:22.102 Starting 1 thread 00:24:22.102 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.647 [2024-07-24 23:12:42.222261] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1047900 is same with the state(5) to be set 00:24:24.647 00:24:24.647 test: (groupid=0, jobs=1): err= 0: pid=961235: Wed Jul 24 23:12:42 2024 00:24:24.647 read: IOPS=8818, BW=138MiB/s (144MB/s)(276MiB/2003msec) 00:24:24.647 slat (usec): min=3, max=113, avg= 3.68, stdev= 1.71 00:24:24.647 clat (usec): min=1371, max=20516, avg=8960.99, stdev=2201.13 00:24:24.647 lat (usec): min=1374, max=20519, avg=8964.67, stdev=2201.31 00:24:24.647 clat percentiles (usec): 00:24:24.648 | 1.00th=[ 4359], 5.00th=[ 5473], 10.00th=[ 6063], 20.00th=[ 6980], 00:24:24.648 | 30.00th=[ 7701], 40.00th=[ 8356], 50.00th=[ 8979], 60.00th=[ 9503], 00:24:24.648 | 70.00th=[10028], 80.00th=[10814], 90.00th=[11994], 95.00th=[12649], 00:24:24.648 | 99.00th=[14091], 99.50th=[14484], 99.90th=[15270], 99.95th=[15401], 00:24:24.648 | 99.99th=[15664] 00:24:24.648 bw ( KiB/s): min=62464, max=74400, per=48.95%, avg=69064.00, stdev=5246.66, samples=4 00:24:24.648 iops : min= 3904, max= 4650, avg=4316.50, stdev=327.92, samples=4 00:24:24.648 write: IOPS=5044, BW=78.8MiB/s (82.7MB/s)(141MiB/1792msec); 0 zone resets 00:24:24.648 slat (usec): min=40, max=457, avg=41.39, stdev= 9.26 00:24:24.648 clat (usec): min=2383, max=16942, avg=9709.07, stdev=1681.56 00:24:24.648 lat (usec): min=2423, max=17075, avg=9750.46, stdev=1684.05 00:24:24.648 clat percentiles (usec): 00:24:24.648 | 1.00th=[ 6259], 5.00th=[ 7177], 10.00th=[ 7635], 20.00th=[ 8291], 00:24:24.648 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[10028], 00:24:24.648 | 70.00th=[10552], 80.00th=[11076], 90.00th=[11731], 95.00th=[12649], 00:24:24.648 | 99.00th=[14222], 99.50th=[15270], 
99.90th=[16581], 99.95th=[16712], 00:24:24.648 | 99.99th=[16909] 00:24:24.648 bw ( KiB/s): min=64672, max=77888, per=88.93%, avg=71776.00, stdev=5755.55, samples=4 00:24:24.648 iops : min= 4042, max= 4868, avg=4486.00, stdev=359.72, samples=4 00:24:24.648 lat (msec) : 2=0.04%, 4=0.35%, 10=64.80%, 20=34.80%, 50=0.01% 00:24:24.648 cpu : usr=82.02%, sys=14.54%, ctx=22, majf=0, minf=18 00:24:24.648 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:24:24.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:24.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:24.648 issued rwts: total=17664,9040,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:24.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:24.648 00:24:24.648 Run status group 0 (all jobs): 00:24:24.648 READ: bw=138MiB/s (144MB/s), 138MiB/s-138MiB/s (144MB/s-144MB/s), io=276MiB (289MB), run=2003-2003msec 00:24:24.648 WRITE: bw=78.8MiB/s (82.7MB/s), 78.8MiB/s-78.8MiB/s (82.7MB/s-82.7MB/s), io=141MiB (148MB), run=1792-1792msec 00:24:24.648 23:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:24.909 23:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:24.909 23:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:24.909 23:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:24.909 23:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:24.909 23:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:24.909 23:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:24.909 23:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:24.909 23:12:42 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:24.909 23:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:24.909 23:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:24.909 rmmod nvme_tcp 00:24:24.909 rmmod nvme_fabrics 00:24:24.909 rmmod nvme_keyring 00:24:24.909 23:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:24.909 23:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:24.909 23:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:24.909 23:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 959881 ']' 00:24:24.909 23:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 959881 00:24:24.909 23:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 959881 ']' 00:24:24.909 23:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 959881 00:24:24.909 23:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:24:24.909 23:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:24.909 23:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 959881 00:24:24.909 23:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:24.909 23:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:24.909 23:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 959881' 00:24:24.909 killing process with pid 959881 00:24:24.909 23:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 959881 00:24:24.909 23:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@974 -- # wait 959881 00:24:25.170 23:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:25.170 23:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:25.170 23:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:25.170 23:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:25.170 23:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:25.170 23:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.170 23:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:25.170 23:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.082 23:12:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:27.082 00:24:27.082 real 0m18.256s 00:24:27.082 user 1m8.053s 00:24:27.082 sys 0m8.087s 00:24:27.082 23:12:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:27.082 23:12:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.082 ************************************ 00:24:27.082 END TEST nvmf_fio_host 00:24:27.082 ************************************ 00:24:27.082 23:12:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:27.082 23:12:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:27.082 23:12:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:27.082 23:12:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.082 ************************************ 00:24:27.082 
START TEST nvmf_failover 00:24:27.082 ************************************ 00:24:27.082 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:27.345 * Looking for test storage... 00:24:27.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:27.345 
23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:24:27.345 23:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:35.532 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:35.532 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:24:35.532 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:35.532 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:35.532 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:35.532 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 
00:24:35.532 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:35.532 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:24:35.532 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:35.532 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:24:35.532 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:24:35.532 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:24:35.532 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:24:35.532 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:24:35.532 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:24:35.532 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:35.532 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:35.532 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:35.532 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:35.532 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:35.532 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:35.532 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:35.532 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:35.532 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:35.532 
23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:35.532 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:35.533 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:35.533 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:35.533 23:12:52 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:35.533 Found net devices under 0000:31:00.0: cvl_0_0 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:35.533 Found net devices under 0000:31:00.1: cvl_0_1 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
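The `pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` glob traced above (nvmf/common.sh@383) is how each matched PCI function is resolved to its kernel net device, producing the `Found net devices under 0000:31:00.x: cvl_0_x` lines. A minimal standalone sketch of the same lookup; `pci_to_netdevs` and the `SYS_ROOT` override are illustrative additions, not part of common.sh:

```shell
# Map a PCI function (domain:bus:dev.fn) to its net interface name(s) via
# sysfs, mirroring the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
# glob in nvmf/common.sh. SYS_ROOT is an override added here so the lookup
# can be exercised against a fake tree, without the real NICs.
SYS_ROOT="${SYS_ROOT:-/sys}"

pci_to_netdevs() {
    local pci=$1 d devs=()
    for d in "$SYS_ROOT/bus/pci/devices/$pci/net/"*; do
        [ -e "$d" ] && devs+=("${d##*/}")   # keep only the interface name
    done
    echo "${devs[@]}"
}

# Demo against a fake sysfs tree shaped like the one this run discovered:
mkdir -p /tmp/fakesys/bus/pci/devices/0000:31:00.0/net/cvl_0_0
SYS_ROOT=/tmp/fakesys pci_to_netdevs 0000:31:00.0   # prints: cvl_0_0
```

An unmatched glob leaves `devs` empty, which is what common.sh's `(( $(...) == 0 ))` guards detect for functions with no bound netdev.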
00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:35.533 23:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:35.533 23:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:35.533 23:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:35.533 23:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:35.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:35.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:24:35.533 00:24:35.533 --- 10.0.0.2 ping statistics --- 00:24:35.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.533 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:24:35.533 23:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:35.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:35.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:24:35.533 00:24:35.533 --- 10.0.0.1 ping statistics --- 00:24:35.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.533 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:24:35.533 23:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:35.533 23:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:24:35.533 23:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:35.533 23:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:35.533 23:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:35.533 23:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:35.533 23:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:35.533 23:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:35.533 23:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:35.533 23:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:35.533 23:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:35.533 23:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 
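The `nvmf_tcp_init` steps traced above move one of the two ice ports into a fresh network namespace so that target (10.0.0.2, inside `cvl_0_0_ns_spdk`) and initiator (10.0.0.1, host namespace) can share one machine over real hardware, then verify the path with pings in both directions. A condensed replay of that sequence; the `run`/`DRY_RUN` wrapper is an addition here so the commands, which need root and the two NICs, print instead of execute by default:

```shell
# Condensed replay of the nvmf_tcp_init sequence logged above.
# Needs root plus the cvl_0_0/cvl_0_1 interfaces, so by default the
# commands are only printed; run with DRY_RUN= (empty) to execute.
DRY_RUN=${DRY_RUN-1}
run() { if [ -n "$DRY_RUN" ]; then echo "$@"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                 # target NIC, isolated ns
run ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, host ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                              # host -> namespace
run ip netns exec "$NS" ping -c 1 10.0.0.1          # namespace -> host
```

With the topology up, every target-side process (including `nvmf_tgt` below) is launched through `ip netns exec cvl_0_0_ns_spdk`, which is exactly what `NVMF_TARGET_NS_CMD` holds.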
00:24:35.533 23:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:35.533 23:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=966249 00:24:35.533 23:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 966249 00:24:35.533 23:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:35.533 23:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 966249 ']' 00:24:35.533 23:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.533 23:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:35.533 23:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:35.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:35.533 23:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:35.533 23:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:35.533 [2024-07-24 23:12:53.218367] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:24:35.533 [2024-07-24 23:12:53.218415] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:35.533 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.802 [2024-07-24 23:12:53.307307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:35.802 [2024-07-24 23:12:53.377630] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:35.802 [2024-07-24 23:12:53.377665] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:35.802 [2024-07-24 23:12:53.377673] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:35.802 [2024-07-24 23:12:53.377679] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:35.802 [2024-07-24 23:12:53.377685] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:35.802 [2024-07-24 23:12:53.377764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:35.802 [2024-07-24 23:12:53.377934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:35.802 [2024-07-24 23:12:53.378039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.373 23:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:36.373 23:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:36.373 23:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:36.373 23:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:36.373 23:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:36.373 23:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:36.373 23:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:36.635 [2024-07-24 23:12:54.187786] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:36.635 23:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:36.635 Malloc0 00:24:36.635 23:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:36.895 23:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:37.157 23:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:37.157 [2024-07-24 23:12:54.875370] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:37.157 23:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:37.418 [2024-07-24 23:12:55.035761] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:37.418 23:12:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:37.418 [2024-07-24 23:12:55.200262] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:37.679 23:12:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=966691 00:24:37.679 23:12:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:37.679 23:12:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:37.679 23:12:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 966691 /var/tmp/bdevperf.sock 00:24:37.679 23:12:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 966691 ']' 00:24:37.679 23:12:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:37.679 23:12:55 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:37.679 23:12:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:37.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:37.679 23:12:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:37.679 23:12:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:38.622 23:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:38.622 23:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:38.622 23:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:38.622 NVMe0n1 00:24:38.622 23:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:38.883 00:24:38.883 23:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=966949 00:24:38.883 23:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:38.883 23:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:39.826 23:12:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:24:40.087 [2024-07-24 23:12:57.715889] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10496a0 is same with the state(5) to be set
00:24:40.088 23:12:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:24:43.389 23:13:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:43.389
00:24:43.389 23:13:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:43.389 [2024-07-24 23:13:01.166428] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104a460 is same with the state(5) to be set
00:24:43.650 23:13:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:24:46.949 23:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:46.949
[2024-07-24 23:13:04.342122] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:46.949 23:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:24:47.652 23:13:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:47.913 [2024-07-24 23:13:05.520385] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set
The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.913 [2024-07-24 23:13:05.520442] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.913 [2024-07-24 23:13:05.520446] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.913 [2024-07-24 23:13:05.520451] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.913 [2024-07-24 23:13:05.520455] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.913 [2024-07-24 23:13:05.520460] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.913 [2024-07-24 23:13:05.520464] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.913 [2024-07-24 23:13:05.520468] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.913 [2024-07-24 23:13:05.520472] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.913 [2024-07-24 23:13:05.520477] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.913 [2024-07-24 23:13:05.520481] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.913 [2024-07-24 23:13:05.520486] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.913 [2024-07-24 23:13:05.520490] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x104b390 is same with the state(5) to be set 00:24:47.913 [2024-07-24 23:13:05.520495] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.913 [2024-07-24 23:13:05.520505] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.913 [2024-07-24 23:13:05.520509] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.913 [2024-07-24 23:13:05.520513] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.913 [2024-07-24 23:13:05.520518] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.913 [2024-07-24 23:13:05.520523] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.913 [2024-07-24 23:13:05.520527] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.914 [2024-07-24 23:13:05.520531] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.914 [2024-07-24 23:13:05.520536] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.914 [2024-07-24 23:13:05.520540] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.914 [2024-07-24 23:13:05.520544] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.914 [2024-07-24 23:13:05.520549] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with 
the state(5) to be set 00:24:47.914 [2024-07-24 23:13:05.520553] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.914 [2024-07-24 23:13:05.520558] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.914 [2024-07-24 23:13:05.520562] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.914 [2024-07-24 23:13:05.520566] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.914 [2024-07-24 23:13:05.520571] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.914 [2024-07-24 23:13:05.520575] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.914 [2024-07-24 23:13:05.520579] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.914 [2024-07-24 23:13:05.520584] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.914 [2024-07-24 23:13:05.520588] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.914 [2024-07-24 23:13:05.520592] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.914 [2024-07-24 23:13:05.520597] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.914 [2024-07-24 23:13:05.520601] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 
00:24:47.914 [2024-07-24 23:13:05.520605] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.914 [2024-07-24 23:13:05.520610] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.914 [2024-07-24 23:13:05.520614] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.914 [2024-07-24 23:13:05.520618] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.914 [2024-07-24 23:13:05.520624] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.914 [2024-07-24 23:13:05.520628] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.914 [2024-07-24 23:13:05.520632] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.914 [2024-07-24 23:13:05.520637] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.914 [2024-07-24 23:13:05.520641] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.914 [2024-07-24 23:13:05.520645] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.914 [2024-07-24 23:13:05.520650] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.914 [2024-07-24 23:13:05.520655] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.914 [2024-07-24 
23:13:05.520659] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b390 is same with the state(5) to be set 00:24:47.914 23:13:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 966949 00:24:54.503 0 00:24:54.503 23:13:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 966691 00:24:54.503 23:13:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 966691 ']' 00:24:54.503 23:13:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 966691 00:24:54.503 23:13:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:54.503 23:13:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:54.503 23:13:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 966691 00:24:54.503 23:13:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:54.503 23:13:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:54.503 23:13:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 966691' 00:24:54.503 killing process with pid 966691 00:24:54.503 23:13:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 966691 00:24:54.503 23:13:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 966691 00:24:54.503 23:13:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:54.503 [2024-07-24 23:12:55.284582] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:24:54.503 [2024-07-24 23:12:55.284656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid966691 ]
00:24:54.503 EAL: No free 2048 kB hugepages reported on node 1
00:24:54.503 [2024-07-24 23:12:55.352833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:54.503 [2024-07-24 23:12:55.417029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:24:54.503 Running I/O for 15 seconds...
00:24:54.503 [2024-07-24 23:12:57.719810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.503 [2024-07-24 23:12:57.719847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same WRITE command / ABORTED - SQ DELETION completion pair repeated for lba:99696 through lba:100160 (len:8 each), timestamps 23:12:57.719863 through 23:12:57.720828 ...]
00:24:54.505 [2024-07-24 23:12:57.720837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.505 [2024-07-24 23:12:57.720843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.505 [2024-07-24 23:12:57.720852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.505 [2024-07-24 23:12:57.720861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.505 [2024-07-24 23:12:57.720870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.505 [2024-07-24 23:12:57.720877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.505 [2024-07-24 23:12:57.720886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.505 [2024-07-24 23:12:57.720893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.505 [2024-07-24 23:12:57.720901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.505 [2024-07-24 23:12:57.720908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.505 [2024-07-24 23:12:57.720918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.505 [2024-07-24 23:12:57.720925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.505 [2024-07-24 23:12:57.720934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:54.505 [2024-07-24 23:12:57.720942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.505 [2024-07-24 23:12:57.720951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.505 [2024-07-24 23:12:57.720958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.505 [2024-07-24 23:12:57.720967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.505 [2024-07-24 23:12:57.720974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.505 [2024-07-24 23:12:57.720983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.505 [2024-07-24 23:12:57.720990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.505 [2024-07-24 23:12:57.720999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.505 [2024-07-24 23:12:57.721006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.505 [2024-07-24 23:12:57.721015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.505 [2024-07-24 23:12:57.721022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.505 [2024-07-24 23:12:57.721031] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.505 [2024-07-24 23:12:57.721038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.505 [2024-07-24 23:12:57.721047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.505 [2024-07-24 23:12:57.721054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.505 [2024-07-24 23:12:57.721065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.505 [2024-07-24 23:12:57.721072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.505 [2024-07-24 23:12:57.721081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.505 [2024-07-24 23:12:57.721088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.505 [2024-07-24 23:12:57.721097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.505 [2024-07-24 23:12:57.721104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.505 [2024-07-24 23:12:57.721113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.505 [2024-07-24 23:12:57.721120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.505 [2024-07-24 23:12:57.721129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.505 [2024-07-24 23:12:57.721136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.505 [2024-07-24 23:12:57.721145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.505 [2024-07-24 23:12:57.721153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.505 [2024-07-24 23:12:57.721162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.505 [2024-07-24 23:12:57.721169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.505 [2024-07-24 23:12:57.721178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.505 [2024-07-24 23:12:57.721185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.505 [2024-07-24 23:12:57.721194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.505 [2024-07-24 23:12:57.721201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.505 [2024-07-24 23:12:57.721220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.505 [2024-07-24 23:12:57.721228] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100352 len:8 PRP1 0x0 PRP2 0x0 00:24:54.505 [2024-07-24 23:12:57.721235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.505 [2024-07-24 23:12:57.721245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.505 [2024-07-24 23:12:57.721251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.505 [2024-07-24 23:12:57.721257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100360 len:8 PRP1 0x0 PRP2 0x0 00:24:54.505 [2024-07-24 23:12:57.721264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.505 [2024-07-24 23:12:57.721271] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.505 [2024-07-24 23:12:57.721277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.505 [2024-07-24 23:12:57.721284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100368 len:8 PRP1 0x0 PRP2 0x0 00:24:54.506 [2024-07-24 23:12:57.721291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.506 [2024-07-24 23:12:57.721299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.506 [2024-07-24 23:12:57.721304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.506 [2024-07-24 23:12:57.721311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100376 len:8 PRP1 0x0 PRP2 0x0 00:24:54.506 [2024-07-24 23:12:57.721318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.506 [2024-07-24 23:12:57.721325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.506 [2024-07-24 23:12:57.721331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.506 [2024-07-24 23:12:57.721337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100384 len:8 PRP1 0x0 PRP2 0x0 00:24:54.506 [2024-07-24 23:12:57.721344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.506 [2024-07-24 23:12:57.721351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.506 [2024-07-24 23:12:57.721357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.506 [2024-07-24 23:12:57.721363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100392 len:8 PRP1 0x0 PRP2 0x0 00:24:54.506 [2024-07-24 23:12:57.721370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.506 [2024-07-24 23:12:57.721377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.506 [2024-07-24 23:12:57.721383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.506 [2024-07-24 23:12:57.721389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100400 len:8 PRP1 0x0 PRP2 0x0 00:24:54.506 [2024-07-24 23:12:57.721396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.506 [2024-07-24 23:12:57.721403] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.506 [2024-07-24 23:12:57.721409] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:24:54.506 [2024-07-24 23:12:57.721416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100408 len:8 PRP1 0x0 PRP2 0x0 00:24:54.506 [2024-07-24 23:12:57.721423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.506 [2024-07-24 23:12:57.721431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.506 [2024-07-24 23:12:57.721436] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.506 [2024-07-24 23:12:57.721443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100416 len:8 PRP1 0x0 PRP2 0x0 00:24:54.506 [2024-07-24 23:12:57.721450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.506 [2024-07-24 23:12:57.721458] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.506 [2024-07-24 23:12:57.721463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.506 [2024-07-24 23:12:57.721469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100424 len:8 PRP1 0x0 PRP2 0x0 00:24:54.506 [2024-07-24 23:12:57.721477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.506 [2024-07-24 23:12:57.721486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.506 [2024-07-24 23:12:57.721491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.506 [2024-07-24 23:12:57.721497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100432 len:8 PRP1 0x0 PRP2 0x0 00:24:54.506 [2024-07-24 23:12:57.721504] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.506 [2024-07-24 23:12:57.721512] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.506 [2024-07-24 23:12:57.721517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.506 [2024-07-24 23:12:57.721523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100440 len:8 PRP1 0x0 PRP2 0x0 00:24:54.506 [2024-07-24 23:12:57.721531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.506 [2024-07-24 23:12:57.721538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.506 [2024-07-24 23:12:57.721544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.506 [2024-07-24 23:12:57.721550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100448 len:8 PRP1 0x0 PRP2 0x0 00:24:54.506 [2024-07-24 23:12:57.721557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.506 [2024-07-24 23:12:57.721564] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.506 [2024-07-24 23:12:57.721570] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.506 [2024-07-24 23:12:57.721576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100456 len:8 PRP1 0x0 PRP2 0x0 00:24:54.506 [2024-07-24 23:12:57.721583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.506 [2024-07-24 23:12:57.721591] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:24:54.506 [2024-07-24 23:12:57.721596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.506 [2024-07-24 23:12:57.721602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100464 len:8 PRP1 0x0 PRP2 0x0 00:24:54.506 [2024-07-24 23:12:57.721609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.506 [2024-07-24 23:12:57.721617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.506 [2024-07-24 23:12:57.721622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.506 [2024-07-24 23:12:57.721628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100472 len:8 PRP1 0x0 PRP2 0x0 00:24:54.506 [2024-07-24 23:12:57.721635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.506 [2024-07-24 23:12:57.721643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.506 [2024-07-24 23:12:57.721648] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.506 [2024-07-24 23:12:57.721654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100480 len:8 PRP1 0x0 PRP2 0x0 00:24:54.506 [2024-07-24 23:12:57.721661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.506 [2024-07-24 23:12:57.721669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.506 [2024-07-24 23:12:57.721674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.506 [2024-07-24 23:12:57.721681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:100488 len:8 PRP1 0x0 PRP2 0x0 00:24:54.506 [2024-07-24 23:12:57.721693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.506 [2024-07-24 23:12:57.721700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.506 [2024-07-24 23:12:57.721705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.506 [2024-07-24 23:12:57.721712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100496 len:8 PRP1 0x0 PRP2 0x0 00:24:54.506 [2024-07-24 23:12:57.721718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.506 [2024-07-24 23:12:57.721726] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.506 [2024-07-24 23:12:57.721732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.506 [2024-07-24 23:12:57.721738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100504 len:8 PRP1 0x0 PRP2 0x0 00:24:54.506 [2024-07-24 23:12:57.721745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.506 [2024-07-24 23:12:57.721755] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.506 [2024-07-24 23:12:57.721761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.506 [2024-07-24 23:12:57.721767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100512 len:8 PRP1 0x0 PRP2 0x0 00:24:54.506 [2024-07-24 23:12:57.721774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.506 [2024-07-24 
23:12:57.721781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.506 [2024-07-24 23:12:57.721787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.506 [2024-07-24 23:12:57.721793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100520 len:8 PRP1 0x0 PRP2 0x0 00:24:54.506 [2024-07-24 23:12:57.721800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.506 [2024-07-24 23:12:57.721808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.506 [2024-07-24 23:12:57.721813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.506 [2024-07-24 23:12:57.721819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100528 len:8 PRP1 0x0 PRP2 0x0 00:24:54.506 [2024-07-24 23:12:57.721826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.506 [2024-07-24 23:12:57.721833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.506 [2024-07-24 23:12:57.721840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.506 [2024-07-24 23:12:57.721846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100536 len:8 PRP1 0x0 PRP2 0x0 00:24:54.506 [2024-07-24 23:12:57.721854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.506 [2024-07-24 23:12:57.721861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.506 [2024-07-24 23:12:57.721867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.506 
[2024-07-24 23:12:57.721873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100544 len:8 PRP1 0x0 PRP2 0x0 00:24:54.506 [2024-07-24 23:12:57.721880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.506 [2024-07-24 23:12:57.721888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.506 [2024-07-24 23:12:57.721894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.507 [2024-07-24 23:12:57.721901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100552 len:8 PRP1 0x0 PRP2 0x0 00:24:54.507 [2024-07-24 23:12:57.721909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.507 [2024-07-24 23:12:57.721916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.507 [2024-07-24 23:12:57.721921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.507 [2024-07-24 23:12:57.721927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100560 len:8 PRP1 0x0 PRP2 0x0 00:24:54.507 [2024-07-24 23:12:57.721934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.507 [2024-07-24 23:12:57.721942] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.507 [2024-07-24 23:12:57.721948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.507 [2024-07-24 23:12:57.721954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100568 len:8 PRP1 0x0 PRP2 0x0 00:24:54.507 [2024-07-24 23:12:57.721961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.507 [2024-07-24 23:12:57.721968] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.507 [2024-07-24 23:12:57.721974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.507 [2024-07-24 23:12:57.721980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100576 len:8 PRP1 0x0 PRP2 0x0 00:24:54.507 [2024-07-24 23:12:57.721987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.507 [2024-07-24 23:12:57.721995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.507 [2024-07-24 23:12:57.722000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.507 [2024-07-24 23:12:57.722006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100584 len:8 PRP1 0x0 PRP2 0x0 00:24:54.507 [2024-07-24 23:12:57.722013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.507 [2024-07-24 23:12:57.722021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.507 [2024-07-24 23:12:57.722026] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.507 [2024-07-24 23:12:57.722032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100592 len:8 PRP1 0x0 PRP2 0x0 00:24:54.507 [2024-07-24 23:12:57.722039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.507 [2024-07-24 23:12:57.722047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.507 [2024-07-24 23:12:57.722053] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.507 [2024-07-24 23:12:57.722058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100600 len:8 PRP1 0x0 PRP2 0x0 00:24:54.507 [2024-07-24 23:12:57.722066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.507 [2024-07-24 23:12:57.722073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.507 [2024-07-24 23:12:57.722078] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.507 [2024-07-24 23:12:57.722085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100608 len:8 PRP1 0x0 PRP2 0x0 00:24:54.507 [2024-07-24 23:12:57.722092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.507 [2024-07-24 23:12:57.722099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.507 [2024-07-24 23:12:57.722107] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.507 [2024-07-24 23:12:57.722113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100616 len:8 PRP1 0x0 PRP2 0x0 00:24:54.507 [2024-07-24 23:12:57.722120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.507 [2024-07-24 23:12:57.722128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.507 [2024-07-24 23:12:57.722133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.507 [2024-07-24 23:12:57.722139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100624 len:8 PRP1 0x0 PRP2 0x0 
00:24:54.507 [2024-07-24 23:12:57.722146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.507 [2024-07-24 23:12:57.722154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.507 [2024-07-24 23:12:57.722159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.507 [2024-07-24 23:12:57.722165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99640 len:8 PRP1 0x0 PRP2 0x0 00:24:54.507 [2024-07-24 23:12:57.722172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.507 [2024-07-24 23:12:57.722180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.507 [2024-07-24 23:12:57.722185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.507 [2024-07-24 23:12:57.722191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99648 len:8 PRP1 0x0 PRP2 0x0 00:24:54.507 [2024-07-24 23:12:57.722198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.507 [2024-07-24 23:12:57.722205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.507 [2024-07-24 23:12:57.732214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.507 [2024-07-24 23:12:57.732242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99656 len:8 PRP1 0x0 PRP2 0x0 00:24:54.507 [2024-07-24 23:12:57.732253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.507 [2024-07-24 23:12:57.732265] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:54.507 [2024-07-24 23:12:57.732271] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:54.507 [2024-07-24 23:12:57.732277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99664 len:8 PRP1 0x0 PRP2 0x0
00:24:54.507 [2024-07-24 23:12:57.732284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.507 [2024-07-24 23:12:57.732291] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:54.507 [2024-07-24 23:12:57.732297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:54.507 [2024-07-24 23:12:57.732303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99672 len:8 PRP1 0x0 PRP2 0x0
00:24:54.507 [2024-07-24 23:12:57.732310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.507 [2024-07-24 23:12:57.732318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:54.507 [2024-07-24 23:12:57.732323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:54.507 [2024-07-24 23:12:57.732329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99680 len:8 PRP1 0x0 PRP2 0x0
00:24:54.507 [2024-07-24 23:12:57.732336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.507 [2024-07-24 23:12:57.732348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:54.507 [2024-07-24 23:12:57.732354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:54.507 [2024-07-24 23:12:57.732360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100632 len:8 PRP1 0x0 PRP2 0x0
00:24:54.507 [2024-07-24 23:12:57.732367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.507 [2024-07-24 23:12:57.732374] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:54.507 [2024-07-24 23:12:57.732380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:54.507 [2024-07-24 23:12:57.732386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100640 len:8 PRP1 0x0 PRP2 0x0
00:24:54.507 [2024-07-24 23:12:57.732393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.507 [2024-07-24 23:12:57.732400] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:54.507 [2024-07-24 23:12:57.732406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:54.507 [2024-07-24 23:12:57.732411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100648 len:8 PRP1 0x0 PRP2 0x0
00:24:54.507 [2024-07-24 23:12:57.732419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.507 [2024-07-24 23:12:57.732426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:54.507 [2024-07-24 23:12:57.732431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:54.507 [2024-07-24 23:12:57.732437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100656 len:8 PRP1 0x0 PRP2 0x0
00:24:54.507 [2024-07-24 23:12:57.732444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.507 [2024-07-24 23:12:57.732483] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24384b0 was disconnected and freed. reset controller.
00:24:54.507 [2024-07-24 23:12:57.732492] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:24:54.507 [2024-07-24 23:12:57.732518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:54.507 [2024-07-24 23:12:57.732527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.507 [2024-07-24 23:12:57.732537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:54.507 [2024-07-24 23:12:57.732544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.507 [2024-07-24 23:12:57.732552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:54.507 [2024-07-24 23:12:57.732559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.507 [2024-07-24 23:12:57.732566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:54.507 [2024-07-24 23:12:57.732573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.507 [2024-07-24 23:12:57.732586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:54.508 [2024-07-24 23:12:57.732632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2411ea0 (9): Bad file descriptor
00:24:54.508 [2024-07-24 23:12:57.736128] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:54.508 [2024-07-24 23:12:57.859047] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:54.508 [2024-07-24 23:13:01.167269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:53496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.508 [2024-07-24 23:13:01.167305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.508 [2024-07-24 23:13:01.167321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:53504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.508 [2024-07-24 23:13:01.167329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.508 [2024-07-24 23:13:01.167339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:53512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.508 [2024-07-24 23:13:01.167346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.508 [2024-07-24 23:13:01.167356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:53520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.508 [2024-07-24 23:13:01.167363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.508 [2024-07-24 23:13:01.167372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.508 [2024-07-24 23:13:01.167379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.508 [2024-07-24 23:13:01.167389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:53536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.508 [2024-07-24 23:13:01.167396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.508 [2024-07-24 23:13:01.167405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:53544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.508 [2024-07-24 23:13:01.167412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.508 [2024-07-24 23:13:01.167421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:53552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.508 [2024-07-24 23:13:01.167428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.508 [2024-07-24 23:13:01.167437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:53560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.508 [2024-07-24 23:13:01.167445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.508 [2024-07-24 23:13:01.167454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:53568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.508 [2024-07-24 23:13:01.167461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.508 [2024-07-24 23:13:01.167470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:53576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.508 [2024-07-24 23:13:01.167477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.508 [2024-07-24 23:13:01.167486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:53584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.508 [2024-07-24 23:13:01.167493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.508 [2024-07-24 23:13:01.167507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:53592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.508 [2024-07-24 23:13:01.167514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.508 [2024-07-24 23:13:01.167523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:53600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.508 [2024-07-24 23:13:01.167531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.508 [2024-07-24 23:13:01.167540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:53608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.508 [2024-07-24 23:13:01.167547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.508 [2024-07-24 23:13:01.167556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:53616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.508 [2024-07-24 23:13:01.167563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.508 [2024-07-24 23:13:01.167572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:53624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.508 [2024-07-24 23:13:01.167579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.508 [2024-07-24 23:13:01.167588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:53632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.508 [2024-07-24 23:13:01.167596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.508 [2024-07-24 23:13:01.167605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:53640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.508 [2024-07-24 23:13:01.167612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.508 [2024-07-24 23:13:01.167621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.508 [2024-07-24 23:13:01.167628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.508 [2024-07-24 23:13:01.167637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:53656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.508 [2024-07-24 23:13:01.167644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.508 [2024-07-24 23:13:01.167653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:53664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.508 [2024-07-24 23:13:01.167660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.508 [2024-07-24 23:13:01.167669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.508 [2024-07-24 23:13:01.167676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.508 [2024-07-24 23:13:01.167685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:53680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.508 [2024-07-24 23:13:01.167692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.508 [2024-07-24 23:13:01.167701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:53688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.508 [2024-07-24 23:13:01.167710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.508 [2024-07-24 23:13:01.167719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:53696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.508 [2024-07-24 23:13:01.167726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.508 [2024-07-24 23:13:01.167735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:53704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.508 [2024-07-24 23:13:01.167742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.508 [2024-07-24 23:13:01.167757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:53712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.508 [2024-07-24 23:13:01.167765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.508 [2024-07-24 23:13:01.167774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:53720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.508 [2024-07-24 23:13:01.167780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.167789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:53728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.509 [2024-07-24 23:13:01.167796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.167805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:53736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.509 [2024-07-24 23:13:01.167812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.167821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:53744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.509 [2024-07-24 23:13:01.167828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.167837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:53752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.509 [2024-07-24 23:13:01.167845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.167854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:53760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.509 [2024-07-24 23:13:01.167861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.167870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:53768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.509 [2024-07-24 23:13:01.167877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.167886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:53776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.509 [2024-07-24 23:13:01.167893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.167902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:53784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.509 [2024-07-24 23:13:01.167910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.167921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:53792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.509 [2024-07-24 23:13:01.167928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.167937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:53800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.509 [2024-07-24 23:13:01.167944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.167953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:53808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.509 [2024-07-24 23:13:01.167960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.167969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:53816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.509 [2024-07-24 23:13:01.167976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.167985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:53824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.509 [2024-07-24 23:13:01.167992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.168001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:53832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.509 [2024-07-24 23:13:01.168008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.168017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:53840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.509 [2024-07-24 23:13:01.168024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.168033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:53848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.509 [2024-07-24 23:13:01.168040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.168049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:53856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.509 [2024-07-24 23:13:01.168056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.168065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:53864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.509 [2024-07-24 23:13:01.168073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.168082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:53872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.509 [2024-07-24 23:13:01.168089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.168098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:54136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.509 [2024-07-24 23:13:01.168105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.168114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:54144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.509 [2024-07-24 23:13:01.168121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.168132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:54152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.509 [2024-07-24 23:13:01.168139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.168148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:54160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.509 [2024-07-24 23:13:01.168155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.168164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:54168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.509 [2024-07-24 23:13:01.168171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.168180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.509 [2024-07-24 23:13:01.168187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.168195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:54184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.509 [2024-07-24 23:13:01.168202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.168211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.509 [2024-07-24 23:13:01.168218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.168227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:54200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.509 [2024-07-24 23:13:01.168234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.168243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:54208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.509 [2024-07-24 23:13:01.168250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.168259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.509 [2024-07-24 23:13:01.168266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.168275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:54224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.509 [2024-07-24 23:13:01.168282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.168290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:54232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.509 [2024-07-24 23:13:01.168297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.168306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:54240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.509 [2024-07-24 23:13:01.168313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.168322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:54248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.509 [2024-07-24 23:13:01.168330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.168340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:54256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.509 [2024-07-24 23:13:01.168347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.168356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:54264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.509 [2024-07-24 23:13:01.168364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.168373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:54272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.509 [2024-07-24 23:13:01.168380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.168389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:54280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.509 [2024-07-24 23:13:01.168396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.168405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:54288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.509 [2024-07-24 23:13:01.168412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.509 [2024-07-24 23:13:01.168421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:54296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.510 [2024-07-24 23:13:01.168428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.510 [2024-07-24 23:13:01.168437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:54304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.510 [2024-07-24 23:13:01.168444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.510 [2024-07-24 23:13:01.168453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:54312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.510 [2024-07-24 23:13:01.168460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.510 [2024-07-24 23:13:01.168469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.510 [2024-07-24 23:13:01.168476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.510 [2024-07-24 23:13:01.168485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:54328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.510 [2024-07-24 23:13:01.168491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.510 [2024-07-24 23:13:01.168501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:54336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.510 [2024-07-24 23:13:01.168507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.510 [2024-07-24 23:13:01.168517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:54344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.510 [2024-07-24 23:13:01.168524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.510 [2024-07-24 23:13:01.168535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:54352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.510 [2024-07-24 23:13:01.168542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.510 [2024-07-24 23:13:01.168551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:54360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.510 [2024-07-24 23:13:01.168558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.510 [2024-07-24 23:13:01.168567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:54368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.510 [2024-07-24 23:13:01.168574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.510 [2024-07-24 23:13:01.168583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:54376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.510 [2024-07-24 23:13:01.168590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.510 [2024-07-24 23:13:01.168599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:54384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.510 [2024-07-24 23:13:01.168605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.510 [2024-07-24 23:13:01.168614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:54392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.510 [2024-07-24 23:13:01.168621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.510 [2024-07-24 23:13:01.168630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:54400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.510 [2024-07-24 23:13:01.168637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.510 [2024-07-24 23:13:01.168646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:54408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.510 [2024-07-24 23:13:01.168653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.510 [2024-07-24 23:13:01.168662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:54416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:54.510 [2024-07-24 23:13:01.168669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.510 [2024-07-24 23:13:01.168678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:53880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.510 [2024-07-24 23:13:01.168684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.510 [2024-07-24 23:13:01.168693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:53888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.510 [2024-07-24 23:13:01.168700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.510 [2024-07-24 23:13:01.168710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:53896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.510 [2024-07-24 23:13:01.168716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.510 [2024-07-24 23:13:01.168725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:53904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.510 [2024-07-24 23:13:01.168733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.510 [2024-07-24 23:13:01.168743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:53912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.510 [2024-07-24 23:13:01.168750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.510 [2024-07-24 23:13:01.168763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:53920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.510 [2024-07-24 23:13:01.168770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.510 [2024-07-24 23:13:01.168779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:53928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.510 [2024-07-24 23:13:01.168786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.510 [2024-07-24 23:13:01.168795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:53936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.510 [2024-07-24 23:13:01.168801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.510 [2024-07-24 23:13:01.168811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:53944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.510 [2024-07-24 23:13:01.168818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.510 [2024-07-24 23:13:01.168827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:53952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.510 [2024-07-24 23:13:01.168833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.510 [2024-07-24 23:13:01.168842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:53960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.510 [2024-07-24 23:13:01.168850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:54.510 [2024-07-24 23:13:01.168859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1
lba:53968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.510 [2024-07-24 23:13:01.168866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.510 [2024-07-24 23:13:01.168875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:53976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.510 [2024-07-24 23:13:01.168882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.510 [2024-07-24 23:13:01.168892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:53984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.510 [2024-07-24 23:13:01.168899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.510 [2024-07-24 23:13:01.168908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:53992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.510 [2024-07-24 23:13:01.168915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.510 [2024-07-24 23:13:01.168924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:54000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.510 [2024-07-24 23:13:01.168931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.510 [2024-07-24 23:13:01.168940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:54424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.510 [2024-07-24 23:13:01.168948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.510 
[2024-07-24 23:13:01.168957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:54432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.510 [2024-07-24 23:13:01.168964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.510 [2024-07-24 23:13:01.168973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:54440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.510 [2024-07-24 23:13:01.168980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.510 [2024-07-24 23:13:01.168989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:54448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.510 [2024-07-24 23:13:01.168996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.510 [2024-07-24 23:13:01.169005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:54456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.510 [2024-07-24 23:13:01.169012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.510 [2024-07-24 23:13:01.169021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:54464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.510 [2024-07-24 23:13:01.169028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.510 [2024-07-24 23:13:01.169037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:54472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.511 [2024-07-24 23:13:01.169044] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:01.169053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:54480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.511 [2024-07-24 23:13:01.169060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:01.169069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.511 [2024-07-24 23:13:01.169076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:01.169085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:54496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.511 [2024-07-24 23:13:01.169091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:01.169100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:54504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.511 [2024-07-24 23:13:01.169107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:01.169116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:54512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:54.511 [2024-07-24 23:13:01.169123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:01.169132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:54008 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.511 [2024-07-24 23:13:01.169139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:01.169150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:54016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.511 [2024-07-24 23:13:01.169157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:01.169166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:54024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.511 [2024-07-24 23:13:01.169173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:01.169182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:54032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.511 [2024-07-24 23:13:01.169189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:01.169198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:54040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.511 [2024-07-24 23:13:01.169205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:01.169214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:54048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.511 [2024-07-24 23:13:01.169221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 
23:13:01.169230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:54056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.511 [2024-07-24 23:13:01.169237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:01.169246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:54064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.511 [2024-07-24 23:13:01.169253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:01.169262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:54072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.511 [2024-07-24 23:13:01.169269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:01.169278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:54080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.511 [2024-07-24 23:13:01.169285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:01.169294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:54088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.511 [2024-07-24 23:13:01.169301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:01.169311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:54096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.511 [2024-07-24 23:13:01.169318] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:01.169327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:54104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.511 [2024-07-24 23:13:01.169333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:01.169343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:54112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.511 [2024-07-24 23:13:01.169354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:01.169363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:54120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.511 [2024-07-24 23:13:01.169370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:01.169394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:54.511 [2024-07-24 23:13:01.169400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:54.511 [2024-07-24 23:13:01.169407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54128 len:8 PRP1 0x0 PRP2 0x0 00:24:54.511 [2024-07-24 23:13:01.169414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:01.169450] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24384b0 was disconnected and freed. reset controller. 
00:24:54.511 [2024-07-24 23:13:01.169460] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:24:54.511 [2024-07-24 23:13:01.169479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.511 [2024-07-24 23:13:01.169488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:01.169497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.511 [2024-07-24 23:13:01.169504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:01.169512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.511 [2024-07-24 23:13:01.169519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:01.169527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.511 [2024-07-24 23:13:01.169534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:01.169541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:54.511 [2024-07-24 23:13:01.173065] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.511 [2024-07-24 23:13:01.173088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2411ea0 (9): Bad file descriptor 00:24:54.511 [2024-07-24 23:13:01.217161] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:54.511 [2024-07-24 23:13:05.521687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:80352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.511 [2024-07-24 23:13:05.521724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:05.521740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:80360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.511 [2024-07-24 23:13:05.521748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:05.521763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:80368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.511 [2024-07-24 23:13:05.521770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:05.521785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:80376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.511 [2024-07-24 23:13:05.521792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:05.521802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:80384 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:54.511 [2024-07-24 23:13:05.521809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:05.521818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:80392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.511 [2024-07-24 23:13:05.521825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:05.521834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.511 [2024-07-24 23:13:05.521841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:05.521851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:80408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.511 [2024-07-24 23:13:05.521858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:05.521867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:80416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.511 [2024-07-24 23:13:05.521874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:05.521883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.511 [2024-07-24 23:13:05.521891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.511 [2024-07-24 23:13:05.521900] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.512 [2024-07-24 23:13:05.521907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.512 [2024-07-24 23:13:05.521916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:80440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.512 [2024-07-24 23:13:05.521923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.512 [2024-07-24 23:13:05.521932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:80448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.512 [2024-07-24 23:13:05.521939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.512 [2024-07-24 23:13:05.521949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:80456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.512 [2024-07-24 23:13:05.521955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.512 [2024-07-24 23:13:05.521965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:80464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.512 [2024-07-24 23:13:05.521972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.512 [2024-07-24 23:13:05.521981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.512 [2024-07-24 23:13:05.521989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.512 [2024-07-24 23:13:05.521999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.512 [2024-07-24 23:13:05.522005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.512 [2024-07-24 23:13:05.522015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:80488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.512 [2024-07-24 23:13:05.522022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.512 [2024-07-24 23:13:05.522031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.512 [2024-07-24 23:13:05.522038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.512 [2024-07-24 23:13:05.522047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:80504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.512 [2024-07-24 23:13:05.522054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.512 [2024-07-24 23:13:05.522063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:80512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.512 [2024-07-24 23:13:05.522070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.512 [2024-07-24 23:13:05.522079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:80520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:54.512 [2024-07-24 23:13:05.522086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.512 [2024-07-24 23:13:05.522096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:80528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.512 [2024-07-24 23:13:05.522103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.512 [2024-07-24 23:13:05.522112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:80536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.512 [2024-07-24 23:13:05.522119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.512 [2024-07-24 23:13:05.522129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:80544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.512 [2024-07-24 23:13:05.522135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.512 [2024-07-24 23:13:05.522145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.512 [2024-07-24 23:13:05.522152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.512 [2024-07-24 23:13:05.522161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:80560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.512 [2024-07-24 23:13:05.522168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.512 [2024-07-24 23:13:05.522177] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.512 [2024-07-24 23:13:05.522184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.512 [2024-07-24 23:13:05.522193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:80576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.512 [2024-07-24 23:13:05.522201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.512 [2024-07-24 23:13:05.522211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:80584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.512 [2024-07-24 23:13:05.522218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.512 [2024-07-24 23:13:05.522227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:80592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.512 [2024-07-24 23:13:05.522235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.512 [2024-07-24 23:13:05.522244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.512 [2024-07-24 23:13:05.522251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.512 [2024-07-24 23:13:05.522260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.512 [2024-07-24 23:13:05.522268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.512 [2024-07-24 23:13:05.522277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.512 [2024-07-24 23:13:05.522284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.512 [2024-07-24 23:13:05.522293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.512 [2024-07-24 23:13:05.522300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.512 [2024-07-24 23:13:05.522309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:80632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.512 [2024-07-24 23:13:05.522317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.512 [2024-07-24 23:13:05.522327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:80640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.512 [2024-07-24 23:13:05.522334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.512 [2024-07-24 23:13:05.522343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:80648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.512 [2024-07-24 23:13:05.522350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.512 [2024-07-24 23:13:05.522359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.512 
[2024-07-24 23:13:05.522366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-24 23:13:05.522375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... the same ABORTED - SQ DELETION (00/08) completion repeats for every in-flight I/O on qid:1: WRITE commands covering lba 80784-81256 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands covering lba 80664-80720 (len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), with varying cid ...]
[2024-07-24 23:13:05.523525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-07-24 23:13:05.523531] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-07-24 23:13:05.523536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81264 len:8 PRP1 0x0 PRP2 0x0
[... the abort/manual-completion pattern then repeats for the remaining queued requests: WRITE lba 81264-81368 and READ lba 80728-80776 (len:8, PRP1 0x0 PRP2 0x0), each completed as ABORTED - SQ DELETION (00/08) ...]
[2024-07-24 23:13:05.534543] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x244e490 was disconnected and freed. reset controller.
[2024-07-24 23:13:05.534552] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
[... four queued ASYNC EVENT REQUEST (0c) admin commands on qid:0 (cid 0-3, cdw10:00000000 cdw11:00000000) are likewise completed as ABORTED - SQ DELETION (00/08) ...]
[2024-07-24 23:13:05.534647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*:
[nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:54.515 [2024-07-24 23:13:05.534687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2411ea0 (9): Bad file descriptor 00:24:54.515 [2024-07-24 23:13:05.538207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.515 [2024-07-24 23:13:05.572782] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:54.515 00:24:54.515 Latency(us) 00:24:54.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.515 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:54.515 Verification LBA range: start 0x0 length 0x4000 00:24:54.515 NVMe0n1 : 15.01 11514.60 44.98 456.63 0.00 10664.06 781.65 20097.71 00:24:54.515 =================================================================================================================== 00:24:54.515 Total : 11514.60 44.98 456.63 0.00 10664.06 781.65 20097.71 00:24:54.515 Received shutdown signal, test time was about 15.000000 seconds 00:24:54.515 00:24:54.515 Latency(us) 00:24:54.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.515 =================================================================================================================== 00:24:54.515 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:54.515 23:13:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:54.515 23:13:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:24:54.515 23:13:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:54.515 23:13:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=969957 00:24:54.515 23:13:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 969957 /var/tmp/bdevperf.sock 00:24:54.515 23:13:11 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:54.515 23:13:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 969957 ']' 00:24:54.515 23:13:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:54.515 23:13:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:54.515 23:13:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:54.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:54.515 23:13:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:54.515 23:13:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:55.087 23:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:55.087 23:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:55.087 23:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:55.347 [2024-07-24 23:13:12.884600] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:55.347 23:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:55.347 [2024-07-24 23:13:13.057001] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 
00:24:55.347 23:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:55.918 NVMe0n1 00:24:55.918 23:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:56.178 00:24:56.178 23:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:56.438 00:24:56.438 23:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:56.438 23:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:56.438 23:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:56.698 23:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:59.997 23:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:59.997 23:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:59.997 23:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=970978 00:24:59.997 23:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:59.997 23:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 970978 00:25:00.941 0 00:25:00.941 23:13:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:00.941 [2024-07-24 23:13:11.967756] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:25:00.941 [2024-07-24 23:13:11.967814] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid969957 ] 00:25:00.941 EAL: No free 2048 kB hugepages reported on node 1 00:25:00.941 [2024-07-24 23:13:12.033438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.941 [2024-07-24 23:13:12.097887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.941 [2024-07-24 23:13:14.337122] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:00.941 [2024-07-24 23:13:14.337166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:00.941 [2024-07-24 23:13:14.337178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.941 [2024-07-24 23:13:14.337186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:00.941 [2024-07-24 23:13:14.337194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.941 [2024-07-24 23:13:14.337202] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:00.941 [2024-07-24 23:13:14.337209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.941 [2024-07-24 23:13:14.337217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:00.941 [2024-07-24 23:13:14.337224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.941 [2024-07-24 23:13:14.337231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.941 [2024-07-24 23:13:14.337257] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.941 [2024-07-24 23:13:14.337271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248dea0 (9): Bad file descriptor 00:25:00.941 [2024-07-24 23:13:14.389464] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:00.941 Running I/O for 1 seconds... 
00:25:00.941 00:25:00.941 Latency(us) 00:25:00.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:00.941 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:00.941 Verification LBA range: start 0x0 length 0x4000 00:25:00.941 NVMe0n1 : 1.01 11598.80 45.31 0.00 0.00 10983.73 2321.07 14199.47 00:25:00.941 =================================================================================================================== 00:25:00.941 Total : 11598.80 45.31 0.00 0.00 10983.73 2321.07 14199.47 00:25:00.941 23:13:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:00.941 23:13:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:01.201 23:13:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:01.462 23:13:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:01.462 23:13:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:01.462 23:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:01.722 23:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:05.021 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:05.021 
23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:05.021 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 969957 00:25:05.021 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 969957 ']' 00:25:05.021 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 969957 00:25:05.021 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:05.021 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:05.021 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 969957 00:25:05.021 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:05.021 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:05.021 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 969957' 00:25:05.021 killing process with pid 969957 00:25:05.021 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 969957 00:25:05.021 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 969957 00:25:05.021 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:05.021 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:05.282 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:05.282 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:05.282 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@116 -- # nvmftestfini 00:25:05.282 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:05.282 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:05.282 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:05.282 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:05.282 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:05.282 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:05.282 rmmod nvme_tcp 00:25:05.282 rmmod nvme_fabrics 00:25:05.282 rmmod nvme_keyring 00:25:05.282 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:05.282 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:05.282 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:05.282 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 966249 ']' 00:25:05.282 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 966249 00:25:05.282 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 966249 ']' 00:25:05.282 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 966249 00:25:05.282 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:05.282 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:05.282 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 966249 00:25:05.282 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:05.282 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 
00:25:05.282 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 966249' 00:25:05.282 killing process with pid 966249 00:25:05.282 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 966249 00:25:05.282 23:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 966249 00:25:05.543 23:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:05.543 23:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:05.543 23:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:05.543 23:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:05.543 23:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:05.543 23:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.543 23:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:05.543 23:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.455 23:13:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:07.455 00:25:07.455 real 0m40.320s 00:25:07.455 user 2m1.673s 00:25:07.455 sys 0m8.813s 00:25:07.455 23:13:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:07.455 23:13:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:07.455 ************************************ 00:25:07.455 END TEST nvmf_failover 00:25:07.455 ************************************ 00:25:07.455 23:13:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh 
--transport=tcp 00:25:07.455 23:13:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:07.455 23:13:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:07.455 23:13:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.717 ************************************ 00:25:07.717 START TEST nvmf_host_discovery 00:25:07.717 ************************************ 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:07.717 * Looking for test storage... 00:25:07.717 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
paths/export.sh@5 -- # export PATH 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 
-- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:25:07.717 23:13:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
net_dev 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:15.861 23:13:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:15.861 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:15.861 23:13:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:15.861 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:15.861 23:13:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:15.861 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:15.862 Found net devices under 0000:31:00.0: cvl_0_0 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:15.862 Found net devices under 0000:31:00.1: cvl_0_1 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:15.862 23:13:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:15.862 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:15.862 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:25:15.862 00:25:15.862 --- 10.0.0.2 ping statistics --- 00:25:15.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.862 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:15.862 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:15.862 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms 00:25:15.862 00:25:15.862 --- 10.0.0.1 ping statistics --- 00:25:15.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.862 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=976666 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 976666 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 976666 ']' 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:15.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:15.862 23:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.123 [2024-07-24 23:13:33.677268] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:25:16.123 [2024-07-24 23:13:33.677332] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:16.123 EAL: No free 2048 kB hugepages reported on node 1 00:25:16.123 [2024-07-24 23:13:33.772926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.123 [2024-07-24 23:13:33.865453] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:16.123 [2024-07-24 23:13:33.865511] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:16.123 [2024-07-24 23:13:33.865519] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:16.123 [2024-07-24 23:13:33.865526] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:16.123 [2024-07-24 23:13:33.865533] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:16.123 [2024-07-24 23:13:33.865557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:16.696 23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:16.696 23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:16.696 23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:16.696 23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:16.696 23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.956 23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:16.956 23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:16.956 23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.956 23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.956 [2024-07-24 23:13:34.518881] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:16.956 23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.956 23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 
00:25:16.957 23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.957 23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.957 [2024-07-24 23:13:34.531090] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:16.957 23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.957 23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:16.957 23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.957 23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.957 null0 00:25:16.957 23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.957 23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:16.957 23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.957 23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.957 null1 00:25:16.957 23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.957 23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:16.957 23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.957 23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.957 23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.957 23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=977006 00:25:16.957 
23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 977006 /tmp/host.sock 00:25:16.957 23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:16.957 23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 977006 ']' 00:25:16.957 23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:25:16.957 23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:16.957 23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:16.957 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:16.957 23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:16.957 23:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.957 [2024-07-24 23:13:34.627061] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:25:16.957 [2024-07-24 23:13:34.627122] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid977006 ] 00:25:16.957 EAL: No free 2048 kB hugepages reported on node 1 00:25:16.957 [2024-07-24 23:13:34.698374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.217 [2024-07-24 23:13:34.773074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.788 23:13:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:17.788 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:18.049 
23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 
00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.049 [2024-07-24 23:13:35.770220] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:18.049 23:13:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:18.049 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:18.311 
23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.311 23:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:25:18.311 23:13:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:18.882 [2024-07-24 23:13:36.470005] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:18.882 [2024-07-24 23:13:36.470025] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:18.882 [2024-07-24 23:13:36.470038] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:18.882 [2024-07-24 23:13:36.558312] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:19.143 [2024-07-24 23:13:36.784465] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:19.143 [2024-07-24 23:13:36.784488] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:19.403 23:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:19.403 23:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:19.403 23:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:19.403 23:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:19.403 23:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:19.403 23:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.403 23:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.403 23:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:19.403 23:13:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:19.403 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.403 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.403 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:19.403 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:19.403 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:19.403 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:19.403 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:19.403 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:19.403 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:19.403 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:19.403 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:19.403 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.403 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:19.403 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.403 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:19.403 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.403 23:13:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:19.404 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:19.404 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:19.404 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:19.404 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:19.404 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:19.404 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:19.404 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:19.404 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:19.404 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:19.404 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.404 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:19.404 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.404 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:19.404 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.404 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:25:19.404 
23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:19.404 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:19.404 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:19.404 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:19.404 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:19.404 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:19.404 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:19.404 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:19.404 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:19.404 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:19.404 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:19.404 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.404 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.404 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.665 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:19.665 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:19.665 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:19.665 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:19.666 
23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.666 [2024-07-24 23:13:37.326513] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:19.666 [2024-07-24 23:13:37.327779] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:19.666 [2024-07-24 23:13:37.327807] 
bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # return 0 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:19.666 [2024-07-24 23:13:37.418076] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:19.666 23:13:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:19.666 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:19.927 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.927 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:19.927 23:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # sleep 1 00:25:19.927 [2024-07-24 23:13:37.523933] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:19.927 [2024-07-24 23:13:37.523951] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:19.927 [2024-07-24 23:13:37.523956] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 
-- # return 0 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.870 [2024-07-24 23:13:38.602037] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:20.870 [2024-07-24 23:13:38.602061] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:20.870 [2024-07-24 23:13:38.606003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.870 [2024-07-24 23:13:38.606021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.870 [2024-07-24 23:13:38.606031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.870 [2024-07-24 23:13:38.606038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.870 [2024-07-24 23:13:38.606046] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.870 [2024-07-24 23:13:38.606054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.870 [2024-07-24 23:13:38.606062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.870 [2024-07-24 23:13:38.606069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.870 [2024-07-24 23:13:38.606076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c19c0 is same with the state(5) to be set 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:20.870 23:13:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:20.870 [2024-07-24 23:13:38.616017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c19c0 (9): Bad file descriptor 00:25:20.870 [2024-07-24 23:13:38.626056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:20.870 [2024-07-24 23:13:38.626429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.870 [2024-07-24 23:13:38.626443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c19c0 with addr=10.0.0.2, port=4420 00:25:20.870 [2024-07-24 23:13:38.626451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c19c0 is same with the state(5) to be set 00:25:20.870 [2024-07-24 23:13:38.626469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c19c0 (9): Bad file descriptor 00:25:20.870 [2024-07-24 23:13:38.626486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:20.870 [2024-07-24 23:13:38.626493] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:20.870 [2024-07-24 23:13:38.626501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:20.870 [2024-07-24 23:13:38.626513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:20.870 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.870 [2024-07-24 23:13:38.636110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:20.870 [2024-07-24 23:13:38.636475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.870 [2024-07-24 23:13:38.636487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c19c0 with addr=10.0.0.2, port=4420 00:25:20.870 [2024-07-24 23:13:38.636494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c19c0 is same with the state(5) to be set 00:25:20.870 [2024-07-24 23:13:38.636512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c19c0 (9): Bad file descriptor 00:25:20.870 [2024-07-24 23:13:38.636522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:20.870 [2024-07-24 23:13:38.636528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:20.870 [2024-07-24 23:13:38.636535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:20.870 [2024-07-24 23:13:38.636546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:20.870 [2024-07-24 23:13:38.646161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:20.870 [2024-07-24 23:13:38.646524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.870 [2024-07-24 23:13:38.646535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c19c0 with addr=10.0.0.2, port=4420 00:25:20.870 [2024-07-24 23:13:38.646542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c19c0 is same with the state(5) to be set 00:25:20.870 [2024-07-24 23:13:38.646585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c19c0 (9): Bad file descriptor 00:25:20.870 [2024-07-24 23:13:38.646604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:20.871 [2024-07-24 23:13:38.646615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:20.871 [2024-07-24 23:13:38.646621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:20.871 [2024-07-24 23:13:38.646633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:21.132 [2024-07-24 23:13:38.656215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:21.132 [2024-07-24 23:13:38.656575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.132 [2024-07-24 23:13:38.656587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c19c0 with addr=10.0.0.2, port=4420 00:25:21.132 [2024-07-24 23:13:38.656594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c19c0 is same with the state(5) to be set 00:25:21.132 [2024-07-24 23:13:38.656612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c19c0 (9): Bad file descriptor 00:25:21.132 [2024-07-24 23:13:38.656622] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:21.132 [2024-07-24 23:13:38.656628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:21.132 [2024-07-24 23:13:38.656635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:21.132 [2024-07-24 23:13:38.656645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:21.132 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.132 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:21.132 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:21.132 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:21.132 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:21.132 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:21.132 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:21.132 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:21.132 [2024-07-24 23:13:38.666267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:21.132 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:21.132 [2024-07-24 23:13:38.666623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.132 [2024-07-24 23:13:38.666635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c19c0 with addr=10.0.0.2, port=4420 00:25:21.132 [2024-07-24 23:13:38.666642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c19c0 is same with the state(5) to be set 00:25:21.132 [2024-07-24 23:13:38.666659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c19c0 (9): Bad file descriptor 00:25:21.132 [2024-07-24 23:13:38.666675] 
nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:21.132 [2024-07-24 23:13:38.666682] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:21.132 [2024-07-24 23:13:38.666689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:21.132 [2024-07-24 23:13:38.666699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:21.132 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:21.132 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.132 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:21.132 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.132 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:21.132 [2024-07-24 23:13:38.676319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:21.132 [2024-07-24 23:13:38.676622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.132 [2024-07-24 23:13:38.676634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c19c0 with addr=10.0.0.2, port=4420 00:25:21.132 [2024-07-24 23:13:38.676642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c19c0 is same with the state(5) to be set 00:25:21.132 [2024-07-24 23:13:38.676653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c19c0 (9): Bad file descriptor 00:25:21.132 [2024-07-24 23:13:38.676670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:21.132 [2024-07-24 23:13:38.676677] 
nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:21.132 [2024-07-24 23:13:38.676684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:21.132 [2024-07-24 23:13:38.676695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:21.132 [2024-07-24 23:13:38.686373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:21.132 [2024-07-24 23:13:38.686735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.132 [2024-07-24 23:13:38.686746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c19c0 with addr=10.0.0.2, port=4420 00:25:21.132 [2024-07-24 23:13:38.686758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c19c0 is same with the state(5) to be set 00:25:21.132 [2024-07-24 23:13:38.686770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c19c0 (9): Bad file descriptor 00:25:21.132 [2024-07-24 23:13:38.686797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:21.132 [2024-07-24 23:13:38.686804] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:21.132 [2024-07-24 23:13:38.686811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:21.132 [2024-07-24 23:13:38.686822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:21.132 [2024-07-24 23:13:38.689797] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:21.132 [2024-07-24 23:13:38.689814] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:21.132 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.132 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:21.132 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:21.132 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:21.132 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:21.132 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:21.132 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:21.133 23:13:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:21.133 23:13:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:21.133 
23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.133 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.393 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:21.393 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:21.393 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:21.393 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:21.393 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:21.393 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:21.393 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:21.393 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:21.393 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:21.393 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:21.393 23:13:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:21.393 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:21.393 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.393 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.393 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.393 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:21.393 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:21.393 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:21.393 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:21.393 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:21.393 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.393 23:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.333 [2024-07-24 23:13:40.057968] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:22.333 [2024-07-24 23:13:40.057988] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:22.333 [2024-07-24 23:13:40.058002] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:22.593 [2024-07-24 23:13:40.145260] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:22.593 [2024-07-24 23:13:40.211351] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:22.593 [2024-07-24 23:13:40.211383] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:22.593 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.593 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:22.593 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:22.593 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:22.593 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:22.593 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:22.593 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:25:22.594 request: 00:25:22.594 { 00:25:22.594 "name": "nvme", 00:25:22.594 "trtype": "tcp", 00:25:22.594 "traddr": "10.0.0.2", 00:25:22.594 "adrfam": "ipv4", 00:25:22.594 "trsvcid": "8009", 00:25:22.594 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:22.594 "wait_for_attach": true, 00:25:22.594 "method": "bdev_nvme_start_discovery", 00:25:22.594 "req_id": 1 00:25:22.594 } 00:25:22.594 Got JSON-RPC error response 00:25:22.594 response: 00:25:22.594 { 00:25:22.594 "code": -17, 00:25:22.594 "message": "File exists" 00:25:22.594 } 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.594 23:13:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.594 request: 00:25:22.594 { 00:25:22.594 "name": "nvme_second", 00:25:22.594 "trtype": "tcp", 00:25:22.594 "traddr": "10.0.0.2", 00:25:22.594 "adrfam": "ipv4", 00:25:22.594 "trsvcid": "8009", 00:25:22.594 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:22.594 "wait_for_attach": true, 00:25:22.594 "method": "bdev_nvme_start_discovery", 00:25:22.594 "req_id": 1 00:25:22.594 } 00:25:22.594 Got JSON-RPC error response 00:25:22.594 response: 00:25:22.594 { 00:25:22.594 "code": -17, 00:25:22.594 "message": "File exists" 00:25:22.594 } 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:22.594 
23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.594 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.854 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:22.854 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:22.854 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:22.854 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:22.854 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:22.854 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.854 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:22.854 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.854 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.854 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:22.854 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:22.854 23:13:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:22.854 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:22.854 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:22.854 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:22.854 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:22.854 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:22.854 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:22.854 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.854 23:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.796 [2024-07-24 23:13:41.483025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:23.796 [2024-07-24 23:13:41.483066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c4910 with addr=10.0.0.2, port=8010 00:25:23.796 [2024-07-24 23:13:41.483082] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:23.796 [2024-07-24 23:13:41.483089] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:23.796 [2024-07-24 23:13:41.483097] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:24.738 [2024-07-24 23:13:42.485276] 
posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:24.738 [2024-07-24 23:13:42.485300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c4910 with addr=10.0.0.2, port=8010 00:25:24.738 [2024-07-24 23:13:42.485311] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:24.738 [2024-07-24 23:13:42.485318] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:24.738 [2024-07-24 23:13:42.485324] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:26.121 [2024-07-24 23:13:43.487235] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:26.121 request: 00:25:26.121 { 00:25:26.121 "name": "nvme_second", 00:25:26.121 "trtype": "tcp", 00:25:26.121 "traddr": "10.0.0.2", 00:25:26.121 "adrfam": "ipv4", 00:25:26.121 "trsvcid": "8010", 00:25:26.121 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:26.121 "wait_for_attach": false, 00:25:26.121 "attach_timeout_ms": 3000, 00:25:26.121 "method": "bdev_nvme_start_discovery", 00:25:26.121 "req_id": 1 00:25:26.121 } 00:25:26.121 Got JSON-RPC error response 00:25:26.121 response: 00:25:26.121 { 00:25:26.121 "code": -110, 00:25:26.121 "message": "Connection timed out" 00:25:26.121 } 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 
-- # get_discovery_ctrlrs 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 977006 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:26.121 rmmod nvme_tcp 00:25:26.121 rmmod nvme_fabrics 00:25:26.121 rmmod nvme_keyring 00:25:26.121 23:13:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 976666 ']' 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 976666 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 976666 ']' 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 976666 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 976666 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 976666' 00:25:26.121 killing process with pid 976666 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 976666 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 976666 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:26.121 23:13:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:26.121 23:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.763 23:13:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:28.763 00:25:28.763 real 0m20.620s 00:25:28.763 user 0m23.103s 00:25:28.763 sys 0m7.638s 00:25:28.763 23:13:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:28.763 23:13:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.763 ************************************ 00:25:28.763 END TEST nvmf_host_discovery 00:25:28.763 ************************************ 00:25:28.763 23:13:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:28.763 23:13:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:28.763 23:13:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:28.763 23:13:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.763 ************************************ 00:25:28.763 START TEST nvmf_host_multipath_status 00:25:28.763 ************************************ 00:25:28.763 23:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:28.763 * Looking for test storage... 00:25:28.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:28.763 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:28.763 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:28.763 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:28.763 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:28.763 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:28.763 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:28.763 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:28.763 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:28.763 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:28.763 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:28.763 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:28.763 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:28.763 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:28.763 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 
-- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:28.763 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:28.763 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:28.763 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:28.763 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:28.763 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:28.763 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:28.763 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:28.763 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:28.763 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.763 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.764 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.764 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:28.764 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.764 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:25:28.764 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:28.764 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:28.764 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:28.764 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:28.764 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:28.764 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:28.764 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:28.764 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:28.764 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:28.764 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:28.764 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:28.764 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:28.764 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:28.764 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:28.764 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:28.764 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:28.764 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:28.764 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:28.764 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:28.764 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:28.764 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.764 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:28.764 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.764 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:28.764 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:28.764 23:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:25:28.764 23:13:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:36.938 
23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:36.938 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:36.938 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:36.938 Found net devices under 0000:31:00.0: cvl_0_0 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:36.938 23:13:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:36.938 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:36.938 Found net devices under 0000:31:00.1: cvl_0_1 00:25:36.939 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:36.939 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:36.939 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:25:36.939 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:36.939 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:36.939 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:36.939 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:36.939 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:36.939 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:36.939 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:36.939 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:36.939 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:36.939 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:36.939 23:13:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:36.939 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:36.939 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:36.939 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:36.939 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:36.939 23:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:36.939 23:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:36.939 23:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:36.939 23:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:36.939 23:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:36.939 23:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:36.939 23:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:36.939 23:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:36.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:36.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.694 ms 00:25:36.939 00:25:36.939 --- 10.0.0.2 ping statistics --- 00:25:36.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.939 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:25:36.939 23:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:36.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:36.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.368 ms 00:25:36.939 00:25:36.939 --- 10.0.0.1 ping statistics --- 00:25:36.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.939 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:25:36.939 23:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:36.939 23:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:25:36.939 23:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:36.939 23:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:36.939 23:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:36.939 23:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:36.939 23:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:36.939 23:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:36.939 23:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:36.939 23:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:36.939 23:13:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:36.939 23:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:36.939 23:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:36.939 23:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=983536 00:25:36.939 23:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 983536 00:25:36.939 23:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:36.939 23:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 983536 ']' 00:25:36.939 23:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:36.939 23:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:36.939 23:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:36.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:36.939 23:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:36.939 23:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:36.939 [2024-07-24 23:13:54.386003] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:25:36.939 [2024-07-24 23:13:54.386071] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:36.939 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.939 [2024-07-24 23:13:54.467907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:36.939 [2024-07-24 23:13:54.541273] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:36.939 [2024-07-24 23:13:54.541311] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:36.939 [2024-07-24 23:13:54.541319] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:36.939 [2024-07-24 23:13:54.541326] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:36.939 [2024-07-24 23:13:54.541331] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:36.939 [2024-07-24 23:13:54.541477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:25:36.939 [2024-07-24 23:13:54.541478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:25:37.510 23:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:25:37.510 23:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0
00:25:37.510 23:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:25:37.510 23:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable
00:25:37.510 23:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:37.510 23:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:37.510 23:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=983536
00:25:37.510 23:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:25:37.770 [2024-07-24 23:13:55.337001] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:37.770 23:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:25:37.770 Malloc0
00:25:37.771 23:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:25:38.031 23:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:38.291 23:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:38.291 [2024-07-24 23:13:55.971377] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:38.291 23:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:38.552 [2024-07-24 23:13:56.123722] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:25:38.552 23:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=983902
00:25:38.552 23:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:25:38.552 23:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:25:38.552 23:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 983902 /var/tmp/bdevperf.sock
00:25:38.552 23:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 983902 ']'
00:25:38.552 23:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:38.552 23:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100
00:25:38.552 23:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:38.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:38.552 23:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable
00:25:38.552 23:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:39.496 23:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:25:39.496 23:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0
00:25:39.496 23:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:25:39.496 23:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
00:25:39.756 Nvme0n1
00:25:39.756 23:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:25:40.327 Nvme0n1
00:25:40.327 23:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:25:40.327 23:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:25:42.240 23:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:25:42.241 23:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:25:42.501 23:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:25:42.501 23:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:25:43.885 23:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:25:43.885 23:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:25:43.885 23:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:43.885 23:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:43.885 23:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:43.885 23:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:25:43.885 23:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:43.885 23:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:43.885 23:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:43.885 23:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:43.885 23:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:43.885 23:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:44.146 23:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:44.146 23:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:44.146 23:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:44.146 23:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:44.146 23:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:44.146 23:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:44.146 23:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:44.146 23:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:44.407 23:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:44.407 23:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:25:44.407 23:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:44.407 23:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:44.668 23:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:44.668 23:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:25:44.668 23:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:25:44.668 23:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:25:44.929 23:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
00:25:45.870 23:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:25:45.870 23:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:25:45.870 23:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:45.870 23:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:46.130 23:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:46.130 23:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:25:46.130 23:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:46.130 23:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:46.392 23:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:46.392 23:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:46.392 23:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:46.392 23:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:46.392 23:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:46.392 23:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:46.392 23:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:46.392 23:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:46.653 23:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:46.653 23:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:46.653 23:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:46.653 23:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:46.653 23:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:46.653 23:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:25:46.653 23:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:46.653 23:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:46.914 23:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:46.914 23:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
00:25:46.914 23:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:25:47.174 23:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:25:47.174 23:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
00:25:48.559 23:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
00:25:48.559 23:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:25:48.559 23:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:48.559 23:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:48.559 23:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:48.559 23:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:25:48.559 23:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:48.559 23:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:48.559 23:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:48.559 23:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:48.559 23:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:48.559 23:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:48.820 23:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:48.820 23:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:48.820 23:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:48.820 23:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:49.079 23:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:49.079 23:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:49.080 23:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:49.080 23:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:49.080 23:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:49.080 23:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:25:49.080 23:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:49.080 23:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:49.341 23:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:49.341 23:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
00:25:49.341 23:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:25:49.601 23:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:25:49.601 23:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:25:50.986 23:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:25:50.986 23:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:25:50.986 23:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:50.986 23:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:50.986 23:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:50.986 23:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:25:50.986 23:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:50.986 23:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:50.986 23:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:50.986 23:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:50.986 23:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:50.986 23:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:51.247 23:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:51.247 23:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:51.247 23:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:51.247 23:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:51.247 23:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:51.247 23:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:51.247 23:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:51.247 23:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:51.507 23:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:51.508 23:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:25:51.508 23:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:51.508 23:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:51.797 23:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:51.797 23:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:25:51.797 23:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:25:51.797 23:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:25:52.058 23:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:25:52.999 23:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:25:52.999 23:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:25:52.999 23:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:52.999 23:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:53.261 23:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:53.261 23:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:25:53.261 23:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:53.261 23:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:53.261 23:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:53.261 23:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:53.261 23:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:53.261 23:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:53.522 23:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:53.522 23:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:53.522 23:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:53.522 23:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:53.783 23:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:53.783 23:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:25:53.783 23:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:53.783 23:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:53.783 23:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:53.783 23:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:25:53.783 23:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:53.783 23:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:54.043 23:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:54.044 23:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:25:54.044 23:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:25:54.304 23:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:25:54.304 23:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:25:55.246 23:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:25:55.246 23:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:25:55.246 23:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:55.246 23:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:55.506 23:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:55.506 23:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:25:55.506 23:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:55.506 23:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:55.766 23:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:55.766 23:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:55.766 23:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:55.766 23:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:55.766 23:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:55.766 23:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:55.766 23:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:55.766 23:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:56.027 23:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:56.027 23:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:25:56.027 23:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:56.027 23:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:56.287 23:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:56.287 23:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:25:56.287 23:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:56.287 23:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:56.287 23:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:56.287 23:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:25:56.547 23:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:25:56.547 23:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:25:56.808 23:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:25:56.808 23:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:25:57.749 23:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:25:57.749 23:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:25:57.749 23:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:57.749 23:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:58.009 23:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:58.009 23:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:25:58.009 23:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:58.009 23:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:58.271 23:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:58.271 23:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:58.271 23:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:58.271 23:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:58.271 23:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:58.271 23:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:58.271 23:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:58.271 23:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:58.531 23:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:58.531 23:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:58.531
23:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.531 23:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:58.791 23:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.791 23:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:58.791 23:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.791 23:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:58.791 23:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.791 23:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:58.791 23:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:59.051 23:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:59.311 23:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 
00:26:00.250 23:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:00.250 23:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:00.250 23:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.250 23:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:00.510 23:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:00.510 23:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:00.511 23:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.511 23:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:00.511 23:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.511 23:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:00.511 23:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.511 23:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:26:00.771 23:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.771 23:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:00.771 23:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.771 23:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:01.031 23:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.031 23:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:01.031 23:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.031 23:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:01.031 23:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.031 23:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:01.031 23:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.031 23:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").accessible' 00:26:01.290 23:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.290 23:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:01.290 23:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:01.550 23:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:01.550 23:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:02.933 23:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:02.933 23:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:02.933 23:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.933 23:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:02.933 23:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.933 23:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:02.933 23:14:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.933 23:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:02.933 23:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.933 23:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:02.933 23:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.933 23:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:03.194 23:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.194 23:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:03.194 23:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:03.194 23:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.455 23:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.455 23:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:03.455 23:14:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.455 23:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:03.455 23:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.455 23:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:03.455 23:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.455 23:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:03.716 23:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.716 23:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:03.716 23:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:03.976 23:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:03.976 23:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 
00:26:05.359 23:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:05.359 23:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:05.359 23:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.359 23:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:05.359 23:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.359 23:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:05.359 23:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.359 23:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:05.359 23:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:05.359 23:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:05.359 23:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.359 23:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:26:05.620 23:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.620 23:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:05.620 23:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.620 23:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:05.880 23:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.880 23:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:05.880 23:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.880 23:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:05.880 23:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.880 23:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:05.880 23:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.880 23:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").accessible' 00:26:06.140 23:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:06.140 23:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 983902 00:26:06.140 23:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 983902 ']' 00:26:06.140 23:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 983902 00:26:06.140 23:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:26:06.140 23:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:06.140 23:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 983902 00:26:06.140 23:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:26:06.140 23:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:26:06.140 23:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 983902' 00:26:06.140 killing process with pid 983902 00:26:06.140 23:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 983902 00:26:06.140 23:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 983902 00:26:06.140 Connection closed with partial response: 00:26:06.140 00:26:06.140 00:26:06.402 23:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 983902 00:26:06.402 23:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 
00:26:06.402 [2024-07-24 23:13:56.184730] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:26:06.402 [2024-07-24 23:13:56.184790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid983902 ] 00:26:06.402 EAL: No free 2048 kB hugepages reported on node 1 00:26:06.403 [2024-07-24 23:13:56.240273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.403 [2024-07-24 23:13:56.292311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:06.403 Running I/O for 90 seconds... 00:26:06.403 [2024-07-24 23:14:09.477056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:53872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.403 [2024-07-24 23:14:09.477091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:06.403 [2024-07-24 23:14:09.477122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:53560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.403 [2024-07-24 23:14:09.477129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:06.403 [2024-07-24 23:14:09.477140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:53568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.403 [2024-07-24 23:14:09.477145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:06.403 [2024-07-24 23:14:09.477156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:53576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:06.403 [2024-07-24 23:14:09.477161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:06.403 [2024-07-24 23:14:09.477172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:53584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.403 [2024-07-24 23:14:09.477177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:06.403 [2024-07-24 23:14:09.477187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:53592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.403 [2024-07-24 23:14:09.477192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:06.403 [2024-07-24 23:14:09.477202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:53600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.403 [2024-07-24 23:14:09.477207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:06.403 [2024-07-24 23:14:09.477217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:53608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.403 [2024-07-24 23:14:09.477222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:06.403 [2024-07-24 23:14:09.477233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:53616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.403 [2024-07-24 23:14:09.477238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003b p:0 m:0 dnr:0 
00:26:06.403 [2024-07-24 23:14:09.477248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:53624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.403 [2024-07-24 23:14:09.477253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:06.403 [2024-07-24 23:14:09.477264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:53632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.403 [2024-07-24 23:14:09.477274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:06.403 [2024-07-24 23:14:09.477284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:53640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.403 [2024-07-24 23:14:09.477289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:06.403 [2024-07-24 23:14:09.477300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:53648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.403 [2024-07-24 23:14:09.477305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:06.403 [2024-07-24 23:14:09.477315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:53656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.403 [2024-07-24 23:14:09.477320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:06.403 [2024-07-24 23:14:09.477330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:53664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:06.403 [2024-07-24 23:14:09.477335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.403 [2024-07-24 23:14:09.477345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:53672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.403 [2024-07-24 23:14:09.477351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:06.403 [2024-07-24 23:14:09.477361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:53680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.403 [2024-07-24 23:14:09.477366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:06.403 [2024-07-24 23:14:09.477376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:53688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.403 [2024-07-24 23:14:09.477381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:06.403 [2024-07-24 23:14:09.477392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:53696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.403 [2024-07-24 23:14:09.477397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:06.403 [2024-07-24 23:14:09.477408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:53704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.403 [2024-07-24 23:14:09.477413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 
00:26:06.403 [2024-07-24 23:14:09.477423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:53712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.403 [2024-07-24 23:14:09.477428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:06.403 [2024-07-24 23:14:09.477438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:53720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.403 [2024-07-24 23:14:09.477443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:06.403 [2024-07-24 23:14:09.477453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:53728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.403 [2024-07-24 23:14:09.477463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:06.403 [2024-07-24 23:14:09.477473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:53736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.403 [2024-07-24 23:14:09.477479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:06.403 [2024-07-24 23:14:09.477490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:53744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.403 [2024-07-24 23:14:09.477495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:06.403 [2024-07-24 23:14:09.477505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:53752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:06.403 [2024-07-24 23:14:09.477511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:06.403 [2024-07-24 23:14:09.477521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:53760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.403 [2024-07-24 23:14:09.477526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:06.403 [2024-07-24 23:14:09.477536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:53768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.403 [2024-07-24 23:14:09.477542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:06.403 [2024-07-24 23:14:09.477552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:53776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.403 [2024-07-24 23:14:09.477558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:06.403 [2024-07-24 23:14:09.477568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:53784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.403 [2024-07-24 23:14:09.477574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:06.403 [2024-07-24 23:14:09.477584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:53792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.403 [2024-07-24 23:14:09.477590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 
00:26:06.403 [2024-07-24 23:14:09.477600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:53800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.403 [2024-07-24 23:14:09.477605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:06.403 [2024-07-24 23:14:09.477615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:53808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.403 [2024-07-24 23:14:09.477620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:06.403 [2024-07-24 23:14:09.479061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:53816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.403 [2024-07-24 23:14:09.479074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:06.403 [2024-07-24 23:14:09.479092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:53824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.403 [2024-07-24 23:14:09.479097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:06.403 [2024-07-24 23:14:09.479115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:53832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.403 [2024-07-24 23:14:09.479120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:06.403 [2024-07-24 23:14:09.479136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:53840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.403 
[2024-07-24 23:14:09.479141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:06.403 [2024-07-24 23:14:09.479157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:53848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.404 [2024-07-24 23:14:09.479162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:09.479178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:53856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.404 [2024-07-24 23:14:09.479183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:09.479199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:53864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.404 [2024-07-24 23:14:09.479204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:09.479219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.404 [2024-07-24 23:14:09.479224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:09.479240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:53888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.404 [2024-07-24 23:14:09.479245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 
23:14:09.479260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:53896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.404 [2024-07-24 23:14:09.479265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:09.479281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:53904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.404 [2024-07-24 23:14:09.479286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:09.479302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:53912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.404 [2024-07-24 23:14:09.479307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:09.479322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:53920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.404 [2024-07-24 23:14:09.479328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:09.479344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:53928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.404 [2024-07-24 23:14:09.479349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:09.479365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:53936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.404 [2024-07-24 23:14:09.479370] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:09.479386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:53944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.404 [2024-07-24 23:14:09.479391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:09.479407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:53952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.404 [2024-07-24 23:14:09.479412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:09.479427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:53960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.404 [2024-07-24 23:14:09.479432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:09.479448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:53968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.404 [2024-07-24 23:14:09.479453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:09.479469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:53976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.404 [2024-07-24 23:14:09.479473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:09.479489] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:53984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.404 [2024-07-24 23:14:09.479494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:09.479510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:53992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.404 [2024-07-24 23:14:09.479515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:09.479531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:54000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.404 [2024-07-24 23:14:09.479536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:09.479622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:54008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.404 [2024-07-24 23:14:09.479630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:09.479647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:54016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.404 [2024-07-24 23:14:09.479653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:09.479670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:54024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.404 [2024-07-24 23:14:09.479675] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:09.479692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:54032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.404 [2024-07-24 23:14:09.479698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:09.479715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:54040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.404 [2024-07-24 23:14:09.479721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:09.479738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:54048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.404 [2024-07-24 23:14:09.479743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:09.479763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:54056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.404 [2024-07-24 23:14:09.479769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:09.479786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:54064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.404 [2024-07-24 23:14:09.479791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:09.479808] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:54072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.404 [2024-07-24 23:14:09.479813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:09.479830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:54080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.404 [2024-07-24 23:14:09.479836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:09.479853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:54088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.404 [2024-07-24 23:14:09.479858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:09.479874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.404 [2024-07-24 23:14:09.479880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:09.479897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:54104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.404 [2024-07-24 23:14:09.479902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:09.479918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:54112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.404 [2024-07-24 23:14:09.479924] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:09.479941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:54120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.404 [2024-07-24 23:14:09.479946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:21.681499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:127552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.404 [2024-07-24 23:14:21.681542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:21.682454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:127576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.404 [2024-07-24 23:14:21.682470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:21.682483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:127712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.404 [2024-07-24 23:14:21.682488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:21.682499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:127728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.404 [2024-07-24 23:14:21.682504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:06.404 [2024-07-24 23:14:21.682515] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:127744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.404 [2024-07-24 23:14:21.682520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:06.405 [2024-07-24 23:14:21.682530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:127760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.405 [2024-07-24 23:14:21.682535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:06.405 [2024-07-24 23:14:21.682545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:127080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.405 [2024-07-24 23:14:21.682550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:06.405 [2024-07-24 23:14:21.682560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:127112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.405 [2024-07-24 23:14:21.682565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:06.405 [2024-07-24 23:14:21.682576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:127144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.405 [2024-07-24 23:14:21.682581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:06.405 [2024-07-24 23:14:21.682591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:127176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.405 [2024-07-24 23:14:21.682596] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:06.405 [2024-07-24 23:14:21.682606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:127776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.405 [2024-07-24 23:14:21.682611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.405 [2024-07-24 23:14:21.682621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:127792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.405 [2024-07-24 23:14:21.682626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:06.405 [2024-07-24 23:14:21.682636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:127808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.405 [2024-07-24 23:14:21.682641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:06.405 [2024-07-24 23:14:21.682653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:127824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.405 [2024-07-24 23:14:21.682658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:06.405 [2024-07-24 23:14:21.682669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:127616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.405 [2024-07-24 23:14:21.682674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:06.405 [2024-07-24 23:14:21.682684] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:127648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.405 [2024-07-24 23:14:21.682689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:26:06.405 Received shutdown signal, test time was about 25.808629 seconds
00:26:06.405
00:26:06.405 Latency(us)
00:26:06.405 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:06.405 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:06.405 Verification LBA range: start 0x0 length 0x4000
00:26:06.405 Nvme0n1 : 25.81 10971.05 42.86 0.00 0.00 11648.30 423.25 3019898.88
00:26:06.405 ===================================================================================================================
00:26:06.405 Total : 10971.05 42.86 0.00 0.00 11648.30 423.25 3019898.88
00:26:06.405 23:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:06.405 23:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:06.405 23:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:06.405 23:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:06.405 23:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:06.405 23:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:26:06.405 23:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:06.405 23:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@120 -- # set +e 00:26:06.405 23:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:06.405 23:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:06.405 rmmod nvme_tcp 00:26:06.405 rmmod nvme_fabrics 00:26:06.405 rmmod nvme_keyring 00:26:06.405 23:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:06.405 23:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:26:06.405 23:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:26:06.405 23:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 983536 ']' 00:26:06.405 23:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 983536 00:26:06.405 23:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 983536 ']' 00:26:06.405 23:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 983536 00:26:06.405 23:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:26:06.405 23:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:06.405 23:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 983536 00:26:06.666 23:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:06.666 23:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:06.666 23:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 983536' 00:26:06.666 killing process with pid 983536 00:26:06.666 
23:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 983536 00:26:06.666 23:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 983536 00:26:06.666 23:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:06.666 23:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:06.666 23:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:06.666 23:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:06.666 23:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:06.666 23:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:06.666 23:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:06.666 23:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.212 23:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:09.212 00:26:09.212 real 0m40.500s 00:26:09.212 user 1m42.104s 00:26:09.212 sys 0m11.623s 00:26:09.212 23:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:09.212 23:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:09.212 ************************************ 00:26:09.212 END TEST nvmf_host_multipath_status 00:26:09.212 ************************************ 00:26:09.212 23:14:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:09.212 23:14:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:09.212 23:14:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:09.212 23:14:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.212 ************************************ 00:26:09.212 START TEST nvmf_discovery_remove_ifc 00:26:09.212 ************************************ 00:26:09.212 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:09.212 * Looking for test storage... 00:26:09.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:09.212 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:09.212 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:09.212 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:09.212 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:09.212 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:09.212 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:09.212 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:09.212 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:09.212 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:09.212 
23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:09.212 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:09.212 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:09.212 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:09.212 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:09.212 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:09.212 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:09.212 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:09.212 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:09.212 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:09.212 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:09.212 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:09.212 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:09.212 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.212 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.213 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.213 23:14:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:09.213 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.213 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:26:09.213 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:09.213 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:09.213 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:09.213 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:09.213 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:09.213 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:09.213 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:09.213 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:09.213 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:09.213 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:09.213 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:09.213 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:09.213 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:09.213 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:09.213 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:09.213 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:09.213 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:09.213 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:09.213 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:09.213 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:09.213 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.213 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:09.213 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.213 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:09.213 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:09.213 23:14:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:26:09.213 23:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:17.367 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:17.367 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:26:17.367 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:17.367 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:17.367 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:17.367 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:17.367 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:17.367 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:26:17.367 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:17.367 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:26:17.367 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:26:17.367 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:26:17.367 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:26:17.367 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:26:17.367 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:26:17.367 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:26:17.367 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:17.367 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:17.367 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:17.367 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:17.367 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:17.367 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:17.367 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:17.367 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:17.367 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:17.367 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:17.367 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:17.367 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:17.367 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:17.367 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:17.367 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:17.368 23:14:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:17.368 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:17.368 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:17.368 
23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:17.368 Found net devices under 0000:31:00.0: cvl_0_0 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 
-- # [[ up == up ]] 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:17.368 Found net devices under 0000:31:00.1: cvl_0_1 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:17.368 
23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:17.368 23:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:17.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:17.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:26:17.368 00:26:17.368 --- 10.0.0.2 ping statistics --- 00:26:17.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.368 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:26:17.368 23:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:17.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:17.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.375 ms 00:26:17.368 00:26:17.368 --- 10.0.0.1 ping statistics --- 00:26:17.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.368 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:26:17.368 23:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:17.368 23:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:26:17.368 23:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:17.368 23:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:17.368 23:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:17.368 23:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:17.368 23:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:17.368 23:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:17.368 23:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:17.368 23:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:17.368 23:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:17.368 23:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:17.368 23:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:17.368 23:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=994132 00:26:17.369 23:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 994132 00:26:17.369 23:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:17.369 23:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 994132 ']' 00:26:17.369 23:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:17.369 23:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:17.369 23:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:17.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:17.369 23:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:17.369 23:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:17.369 [2024-07-24 23:14:35.107004] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:26:17.369 [2024-07-24 23:14:35.107068] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:17.369 EAL: No free 2048 kB hugepages reported on node 1 00:26:17.629 [2024-07-24 23:14:35.205150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.629 [2024-07-24 23:14:35.297284] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:17.629 [2024-07-24 23:14:35.297340] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:17.629 [2024-07-24 23:14:35.297348] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:17.629 [2024-07-24 23:14:35.297355] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:17.629 [2024-07-24 23:14:35.297361] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:17.629 [2024-07-24 23:14:35.297388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.200 23:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:18.200 23:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:18.200 23:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:18.200 23:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:18.200 23:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:18.200 23:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:18.200 23:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:18.200 23:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.200 23:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:18.200 [2024-07-24 23:14:35.953655] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:18.200 [2024-07-24 23:14:35.961960] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:18.200 null0 00:26:18.460 [2024-07-24 23:14:35.993863] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:18.460 23:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.460 23:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=994360 00:26:18.460 23:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:18.460 23:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 994360 /tmp/host.sock 00:26:18.460 23:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 994360 ']' 00:26:18.460 23:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:26:18.460 23:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:18.460 23:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:18.460 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:18.460 23:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:18.460 23:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:18.460 [2024-07-24 23:14:36.067659] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:26:18.460 [2024-07-24 23:14:36.067721] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid994360 ] 00:26:18.460 EAL: No free 2048 kB hugepages reported on node 1 00:26:18.460 [2024-07-24 23:14:36.139274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.460 [2024-07-24 23:14:36.214610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.460 23:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:19.460 23:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:19.460 23:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:19.460 23:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:19.460 23:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.460 23:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:19.460 23:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.460 23:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:19.460 23:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.460 23:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:19.460 23:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.460 23:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:19.460 23:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.460 23:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:20.423 [2024-07-24 23:14:37.982017] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:20.423 [2024-07-24 23:14:37.982039] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:20.423 [2024-07-24 23:14:37.982052] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:20.423 [2024-07-24 23:14:38.070324] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:20.683 [2024-07-24 23:14:38.297547] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:20.683 [2024-07-24 23:14:38.297598] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:20.683 [2024-07-24 23:14:38.297620] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:20.683 [2024-07-24 23:14:38.297635] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:20.683 [2024-07-24 23:14:38.297655] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:20.683 23:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:26:20.683 23:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:20.683 23:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:20.683 [2024-07-24 23:14:38.301977] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xdab520 was disconnected and freed. delete nvme_qpair. 00:26:20.683 23:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:20.683 23:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:20.683 23:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.683 23:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:20.683 23:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:20.683 23:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:20.683 23:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.683 23:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:20.683 23:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:20.683 23:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:20.943 23:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:20.943 23:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:20.943 23:14:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:20.943 23:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:20.943 23:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:20.943 23:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.943 23:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:20.943 23:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:20.943 23:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.943 23:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:20.943 23:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:21.883 23:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:21.883 23:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:21.883 23:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:21.883 23:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.883 23:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:21.883 23:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:21.883 23:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:21.883 23:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.883 23:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:21.883 23:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:22.823 23:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:22.823 23:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:22.823 23:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:22.823 23:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.083 23:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:23.083 23:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:23.083 23:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:23.083 23:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.083 23:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:23.083 23:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:24.025 23:14:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:24.025 23:14:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:24.025 23:14:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:24.025 23:14:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:24.025 23:14:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:24.025 23:14:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:24.025 23:14:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:24.025 23:14:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.025 23:14:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:24.025 23:14:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:24.966 23:14:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:24.966 23:14:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:24.966 23:14:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:24.966 23:14:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.966 23:14:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:24.966 23:14:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:24.966 23:14:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:24.966 23:14:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.227 23:14:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:25.227 23:14:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:26.167 [2024-07-24 23:14:43.738153] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:26.167 [2024-07-24 23:14:43.738200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.167 [2024-07-24 23:14:43.738212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.167 [2024-07-24 23:14:43.738221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.167 [2024-07-24 23:14:43.738229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.167 [2024-07-24 23:14:43.738237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.167 [2024-07-24 23:14:43.738244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.167 [2024-07-24 23:14:43.738251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.167 [2024-07-24 23:14:43.738258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.167 [2024-07-24 23:14:43.738266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.167 [2024-07-24 23:14:43.738273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.167 [2024-07-24 23:14:43.738280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xd720c0 is same with the state(5) to be set 00:26:26.167 [2024-07-24 23:14:43.748172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd720c0 (9): Bad file descriptor 00:26:26.167 [2024-07-24 23:14:43.758211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:26.167 23:14:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:26.167 23:14:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:26.167 23:14:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:26.167 23:14:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:26.167 23:14:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.167 23:14:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:26.168 23:14:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:27.108 [2024-07-24 23:14:44.760776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:27.108 [2024-07-24 23:14:44.760814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd720c0 with addr=10.0.0.2, port=4420 00:26:27.108 [2024-07-24 23:14:44.760825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd720c0 is same with the state(5) to be set 00:26:27.108 [2024-07-24 23:14:44.760847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd720c0 (9): Bad file descriptor 00:26:27.108 [2024-07-24 23:14:44.761214] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:26:27.108 [2024-07-24 23:14:44.761243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:27.108 [2024-07-24 23:14:44.761251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:27.108 [2024-07-24 23:14:44.761259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:27.108 [2024-07-24 23:14:44.761273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.108 [2024-07-24 23:14:44.761281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:27.108 23:14:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.108 23:14:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:27.108 23:14:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:28.048 [2024-07-24 23:14:45.763654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:28.048 [2024-07-24 23:14:45.763673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:28.048 [2024-07-24 23:14:45.763680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:28.048 [2024-07-24 23:14:45.763687] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:26:28.048 [2024-07-24 23:14:45.763699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:28.048 [2024-07-24 23:14:45.763717] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:28.048 [2024-07-24 23:14:45.763738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.048 [2024-07-24 23:14:45.763748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.048 [2024-07-24 23:14:45.763763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.048 [2024-07-24 23:14:45.763770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.048 [2024-07-24 23:14:45.763778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.048 [2024-07-24 23:14:45.763785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.049 [2024-07-24 23:14:45.763793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.049 [2024-07-24 23:14:45.763800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.049 [2024-07-24 23:14:45.763808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.049 [2024-07-24 23:14:45.763815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.049 [2024-07-24 23:14:45.763823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:26:28.049 [2024-07-24 23:14:45.764131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd71520 (9): Bad file descriptor 00:26:28.049 [2024-07-24 23:14:45.765142] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:28.049 [2024-07-24 23:14:45.765153] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:28.049 23:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:28.049 23:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:28.049 23:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:28.049 23:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:28.049 23:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.049 23:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:28.049 23:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.049 23:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.309 23:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:28.309 23:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:28.309 23:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:28.309 23:14:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:28.309 23:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:28.309 23:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:28.309 23:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:28.309 23:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.309 23:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:28.309 23:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.309 23:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:28.309 23:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.309 23:14:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:28.309 23:14:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:29.251 23:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:29.251 23:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:29.251 23:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:29.251 23:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.251 23:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:29.251 23:14:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:29.251 23:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:29.251 23:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.512 23:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:29.512 23:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:30.081 [2024-07-24 23:14:47.818943] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:30.081 [2024-07-24 23:14:47.818963] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:30.081 [2024-07-24 23:14:47.818977] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:30.341 [2024-07-24 23:14:47.948372] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:30.341 [2024-07-24 23:14:48.007108] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:30.341 [2024-07-24 23:14:48.007145] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:30.341 [2024-07-24 23:14:48.007164] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:30.341 [2024-07-24 23:14:48.007177] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:30.341 [2024-07-24 23:14:48.007184] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:30.341 [2024-07-24 23:14:48.015234] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xdb4870 was disconnected and freed. delete nvme_qpair. 
00:26:30.341 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:30.341 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:30.341 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:30.341 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:30.341 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.341 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.341 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:30.341 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.341 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:30.341 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:30.341 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 994360 00:26:30.341 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 994360 ']' 00:26:30.341 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 994360 00:26:30.341 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:26:30.341 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:30.601 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 994360 00:26:30.601 
23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:30.601 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:30.601 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 994360' 00:26:30.601 killing process with pid 994360 00:26:30.601 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 994360 00:26:30.601 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 994360 00:26:30.601 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:30.601 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:30.601 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:26:30.601 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:30.601 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:26:30.601 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:30.601 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:30.601 rmmod nvme_tcp 00:26:30.601 rmmod nvme_fabrics 00:26:30.601 rmmod nvme_keyring 00:26:30.601 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:30.601 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:26:30.601 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:26:30.601 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 994132 ']' 00:26:30.601 23:14:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 994132 00:26:30.601 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 994132 ']' 00:26:30.601 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 994132 00:26:30.601 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:26:30.601 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:30.861 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 994132 00:26:30.861 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:30.861 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:30.861 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 994132' 00:26:30.861 killing process with pid 994132 00:26:30.861 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 994132 00:26:30.861 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 994132 00:26:30.861 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:30.861 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:30.861 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:30.861 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:30.861 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:30.861 23:14:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.861 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:30.861 23:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.397 23:14:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:33.397 00:26:33.397 real 0m24.091s 00:26:33.397 user 0m27.576s 00:26:33.397 sys 0m7.490s 00:26:33.397 23:14:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:33.397 23:14:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:33.397 ************************************ 00:26:33.397 END TEST nvmf_discovery_remove_ifc 00:26:33.397 ************************************ 00:26:33.397 23:14:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:33.397 23:14:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:33.397 23:14:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:33.397 23:14:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.397 ************************************ 00:26:33.397 START TEST nvmf_identify_kernel_target 00:26:33.397 ************************************ 00:26:33.397 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:33.397 * Looking for test storage... 
00:26:33.397 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:33.397 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:33.397 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:33.397 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:33.397 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:33.397 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:33.397 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:33.397 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:33.397 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:26:33.398 23:14:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:41.530 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.530 23:14:58 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:41.530 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:41.530 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:41.531 23:14:58 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:41.531 Found net devices under 0000:31:00.0: cvl_0_0 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:41.531 Found net devices under 0000:31:00.1: cvl_0_1 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:26:41.531 
23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:41.531 
23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:41.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:41.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:26:41.531 00:26:41.531 --- 10.0.0.2 ping statistics --- 00:26:41.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.531 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:26:41.531 23:14:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:41.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:41.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:26:41.531 00:26:41.531 --- 10.0.0.1 ping statistics --- 00:26:41.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.531 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:26:41.531 23:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:41.531 23:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:26:41.531 23:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:41.531 23:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:41.531 23:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:41.531 23:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:41.531 23:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:41.531 23:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:41.531 23:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:41.531 23:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:41.531 23:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:41.531 23:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:26:41.531 23:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:41.531 23:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:41.531 23:14:59 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.531 23:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.531 23:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:41.531 23:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.531 23:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:41.531 23:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:41.531 23:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:41.531 23:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:41.531 23:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:41.531 23:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:41.531 23:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:41.531 23:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:41.531 23:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:41.531 23:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:41.531 23:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@639 -- # local block nvme 00:26:41.531 23:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:41.531 23:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:41.531 23:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:41.531 23:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:45.735 Waiting for block devices as requested 00:26:45.735 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:45.735 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:45.735 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:45.735 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:45.735 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:45.735 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:45.735 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:45.735 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:45.995 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:26:45.995 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:45.995 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:46.255 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:46.255 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:46.255 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:46.516 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:46.516 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:46.516 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:46.516 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:46.516 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:46.516 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 
00:26:46.516 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:26:46.516 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:46.516 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:46.516 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:46.516 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:46.516 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:46.516 No valid GPT data, bailing 00:26:46.516 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:46.516 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:26:46.516 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:26:46.516 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:46.516 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:46.516 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:46.516 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:46.516 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:46.777 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:46.777 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:26:46.777 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:46.777 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:26:46.777 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:46.777 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:26:46.777 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:26:46.777 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:26:46.777 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:46.777 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.1 -t tcp -s 4420 00:26:46.777 00:26:46.777 Discovery Log Number of Records 2, Generation counter 2 00:26:46.777 =====Discovery Log Entry 0====== 00:26:46.777 trtype: tcp 00:26:46.777 adrfam: ipv4 00:26:46.777 subtype: current discovery subsystem 00:26:46.777 treq: not specified, sq flow control disable supported 00:26:46.777 portid: 1 00:26:46.777 trsvcid: 4420 00:26:46.777 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:46.777 traddr: 10.0.0.1 00:26:46.777 eflags: none 00:26:46.777 sectype: none 00:26:46.777 =====Discovery Log Entry 1====== 00:26:46.777 trtype: tcp 00:26:46.777 adrfam: ipv4 00:26:46.777 subtype: nvme subsystem 00:26:46.777 treq: not specified, sq flow control disable supported 00:26:46.777 portid: 1 
00:26:46.777 trsvcid: 4420 00:26:46.777 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:46.777 traddr: 10.0.0.1 00:26:46.777 eflags: none 00:26:46.777 sectype: none 00:26:46.777 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:46.777 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:46.777 EAL: No free 2048 kB hugepages reported on node 1 00:26:46.777 ===================================================== 00:26:46.777 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:46.777 ===================================================== 00:26:46.777 Controller Capabilities/Features 00:26:46.777 ================================ 00:26:46.777 Vendor ID: 0000 00:26:46.777 Subsystem Vendor ID: 0000 00:26:46.777 Serial Number: 70867191c855523ef42d 00:26:46.777 Model Number: Linux 00:26:46.777 Firmware Version: 6.7.0-68 00:26:46.777 Recommended Arb Burst: 0 00:26:46.777 IEEE OUI Identifier: 00 00 00 00:26:46.777 Multi-path I/O 00:26:46.777 May have multiple subsystem ports: No 00:26:46.777 May have multiple controllers: No 00:26:46.777 Associated with SR-IOV VF: No 00:26:46.777 Max Data Transfer Size: Unlimited 00:26:46.777 Max Number of Namespaces: 0 00:26:46.777 Max Number of I/O Queues: 1024 00:26:46.777 NVMe Specification Version (VS): 1.3 00:26:46.777 NVMe Specification Version (Identify): 1.3 00:26:46.777 Maximum Queue Entries: 1024 00:26:46.777 Contiguous Queues Required: No 00:26:46.777 Arbitration Mechanisms Supported 00:26:46.777 Weighted Round Robin: Not Supported 00:26:46.777 Vendor Specific: Not Supported 00:26:46.777 Reset Timeout: 7500 ms 00:26:46.777 Doorbell Stride: 4 bytes 00:26:46.777 NVM Subsystem Reset: Not Supported 00:26:46.777 Command Sets Supported 00:26:46.777 NVM Command Set: Supported 00:26:46.777 Boot Partition: Not Supported 
00:26:46.777 Memory Page Size Minimum: 4096 bytes 00:26:46.777 Memory Page Size Maximum: 4096 bytes 00:26:46.777 Persistent Memory Region: Not Supported 00:26:46.777 Optional Asynchronous Events Supported 00:26:46.777 Namespace Attribute Notices: Not Supported 00:26:46.777 Firmware Activation Notices: Not Supported 00:26:46.777 ANA Change Notices: Not Supported 00:26:46.777 PLE Aggregate Log Change Notices: Not Supported 00:26:46.777 LBA Status Info Alert Notices: Not Supported 00:26:46.777 EGE Aggregate Log Change Notices: Not Supported 00:26:46.777 Normal NVM Subsystem Shutdown event: Not Supported 00:26:46.777 Zone Descriptor Change Notices: Not Supported 00:26:46.777 Discovery Log Change Notices: Supported 00:26:46.777 Controller Attributes 00:26:46.777 128-bit Host Identifier: Not Supported 00:26:46.777 Non-Operational Permissive Mode: Not Supported 00:26:46.777 NVM Sets: Not Supported 00:26:46.777 Read Recovery Levels: Not Supported 00:26:46.777 Endurance Groups: Not Supported 00:26:46.777 Predictable Latency Mode: Not Supported 00:26:46.777 Traffic Based Keep ALive: Not Supported 00:26:46.777 Namespace Granularity: Not Supported 00:26:46.777 SQ Associations: Not Supported 00:26:46.777 UUID List: Not Supported 00:26:46.777 Multi-Domain Subsystem: Not Supported 00:26:46.777 Fixed Capacity Management: Not Supported 00:26:46.777 Variable Capacity Management: Not Supported 00:26:46.777 Delete Endurance Group: Not Supported 00:26:46.777 Delete NVM Set: Not Supported 00:26:46.777 Extended LBA Formats Supported: Not Supported 00:26:46.777 Flexible Data Placement Supported: Not Supported 00:26:46.777 00:26:46.777 Controller Memory Buffer Support 00:26:46.777 ================================ 00:26:46.777 Supported: No 00:26:46.777 00:26:46.777 Persistent Memory Region Support 00:26:46.777 ================================ 00:26:46.777 Supported: No 00:26:46.777 00:26:46.777 Admin Command Set Attributes 00:26:46.777 ============================ 00:26:46.777 Security 
Send/Receive: Not Supported 00:26:46.777 Format NVM: Not Supported 00:26:46.777 Firmware Activate/Download: Not Supported 00:26:46.777 Namespace Management: Not Supported 00:26:46.777 Device Self-Test: Not Supported 00:26:46.777 Directives: Not Supported 00:26:46.777 NVMe-MI: Not Supported 00:26:46.777 Virtualization Management: Not Supported 00:26:46.777 Doorbell Buffer Config: Not Supported 00:26:46.777 Get LBA Status Capability: Not Supported 00:26:46.777 Command & Feature Lockdown Capability: Not Supported 00:26:46.777 Abort Command Limit: 1 00:26:46.777 Async Event Request Limit: 1 00:26:46.777 Number of Firmware Slots: N/A 00:26:46.777 Firmware Slot 1 Read-Only: N/A 00:26:46.777 Firmware Activation Without Reset: N/A 00:26:46.777 Multiple Update Detection Support: N/A 00:26:46.777 Firmware Update Granularity: No Information Provided 00:26:46.777 Per-Namespace SMART Log: No 00:26:46.777 Asymmetric Namespace Access Log Page: Not Supported 00:26:46.777 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:46.777 Command Effects Log Page: Not Supported 00:26:46.777 Get Log Page Extended Data: Supported 00:26:46.777 Telemetry Log Pages: Not Supported 00:26:46.777 Persistent Event Log Pages: Not Supported 00:26:46.777 Supported Log Pages Log Page: May Support 00:26:46.777 Commands Supported & Effects Log Page: Not Supported 00:26:46.777 Feature Identifiers & Effects Log Page:May Support 00:26:46.777 NVMe-MI Commands & Effects Log Page: May Support 00:26:46.777 Data Area 4 for Telemetry Log: Not Supported 00:26:46.777 Error Log Page Entries Supported: 1 00:26:46.777 Keep Alive: Not Supported 00:26:46.777 00:26:46.777 NVM Command Set Attributes 00:26:46.777 ========================== 00:26:46.777 Submission Queue Entry Size 00:26:46.777 Max: 1 00:26:46.777 Min: 1 00:26:46.777 Completion Queue Entry Size 00:26:46.777 Max: 1 00:26:46.777 Min: 1 00:26:46.777 Number of Namespaces: 0 00:26:46.777 Compare Command: Not Supported 00:26:46.777 Write Uncorrectable Command: 
Not Supported 00:26:46.777 Dataset Management Command: Not Supported 00:26:46.777 Write Zeroes Command: Not Supported 00:26:46.777 Set Features Save Field: Not Supported 00:26:46.777 Reservations: Not Supported 00:26:46.777 Timestamp: Not Supported 00:26:46.778 Copy: Not Supported 00:26:46.778 Volatile Write Cache: Not Present 00:26:46.778 Atomic Write Unit (Normal): 1 00:26:46.778 Atomic Write Unit (PFail): 1 00:26:46.778 Atomic Compare & Write Unit: 1 00:26:46.778 Fused Compare & Write: Not Supported 00:26:46.778 Scatter-Gather List 00:26:46.778 SGL Command Set: Supported 00:26:46.778 SGL Keyed: Not Supported 00:26:46.778 SGL Bit Bucket Descriptor: Not Supported 00:26:46.778 SGL Metadata Pointer: Not Supported 00:26:46.778 Oversized SGL: Not Supported 00:26:46.778 SGL Metadata Address: Not Supported 00:26:46.778 SGL Offset: Supported 00:26:46.778 Transport SGL Data Block: Not Supported 00:26:46.778 Replay Protected Memory Block: Not Supported 00:26:46.778 00:26:46.778 Firmware Slot Information 00:26:46.778 ========================= 00:26:46.778 Active slot: 0 00:26:46.778 00:26:46.778 00:26:46.778 Error Log 00:26:46.778 ========= 00:26:46.778 00:26:46.778 Active Namespaces 00:26:46.778 ================= 00:26:46.778 Discovery Log Page 00:26:46.778 ================== 00:26:46.778 Generation Counter: 2 00:26:46.778 Number of Records: 2 00:26:46.778 Record Format: 0 00:26:46.778 00:26:46.778 Discovery Log Entry 0 00:26:46.778 ---------------------- 00:26:46.778 Transport Type: 3 (TCP) 00:26:46.778 Address Family: 1 (IPv4) 00:26:46.778 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:46.778 Entry Flags: 00:26:46.778 Duplicate Returned Information: 0 00:26:46.778 Explicit Persistent Connection Support for Discovery: 0 00:26:46.778 Transport Requirements: 00:26:46.778 Secure Channel: Not Specified 00:26:46.778 Port ID: 1 (0x0001) 00:26:46.778 Controller ID: 65535 (0xffff) 00:26:46.778 Admin Max SQ Size: 32 00:26:46.778 Transport Service Identifier: 4420 
00:26:46.778 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:46.778 Transport Address: 10.0.0.1 00:26:46.778 Discovery Log Entry 1 00:26:46.778 ---------------------- 00:26:46.778 Transport Type: 3 (TCP) 00:26:46.778 Address Family: 1 (IPv4) 00:26:46.778 Subsystem Type: 2 (NVM Subsystem) 00:26:46.778 Entry Flags: 00:26:46.778 Duplicate Returned Information: 0 00:26:46.778 Explicit Persistent Connection Support for Discovery: 0 00:26:46.778 Transport Requirements: 00:26:46.778 Secure Channel: Not Specified 00:26:46.778 Port ID: 1 (0x0001) 00:26:46.778 Controller ID: 65535 (0xffff) 00:26:46.778 Admin Max SQ Size: 32 00:26:46.778 Transport Service Identifier: 4420 00:26:46.778 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:46.778 Transport Address: 10.0.0.1 00:26:46.778 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:46.778 EAL: No free 2048 kB hugepages reported on node 1 00:26:46.778 get_feature(0x01) failed 00:26:46.778 get_feature(0x02) failed 00:26:46.778 get_feature(0x04) failed 00:26:46.778 ===================================================== 00:26:46.778 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:46.778 ===================================================== 00:26:46.778 Controller Capabilities/Features 00:26:46.778 ================================ 00:26:46.778 Vendor ID: 0000 00:26:46.778 Subsystem Vendor ID: 0000 00:26:46.778 Serial Number: 5ab4c96959f362c42f1a 00:26:46.778 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:46.778 Firmware Version: 6.7.0-68 00:26:46.778 Recommended Arb Burst: 6 00:26:46.778 IEEE OUI Identifier: 00 00 00 00:26:46.778 Multi-path I/O 00:26:46.778 May have multiple subsystem ports: Yes 00:26:46.778 May have multiple 
controllers: Yes 00:26:46.778 Associated with SR-IOV VF: No 00:26:46.778 Max Data Transfer Size: Unlimited 00:26:46.778 Max Number of Namespaces: 1024 00:26:46.778 Max Number of I/O Queues: 128 00:26:46.778 NVMe Specification Version (VS): 1.3 00:26:46.778 NVMe Specification Version (Identify): 1.3 00:26:46.778 Maximum Queue Entries: 1024 00:26:46.778 Contiguous Queues Required: No 00:26:46.778 Arbitration Mechanisms Supported 00:26:46.778 Weighted Round Robin: Not Supported 00:26:46.778 Vendor Specific: Not Supported 00:26:46.778 Reset Timeout: 7500 ms 00:26:46.778 Doorbell Stride: 4 bytes 00:26:46.778 NVM Subsystem Reset: Not Supported 00:26:46.778 Command Sets Supported 00:26:46.778 NVM Command Set: Supported 00:26:46.778 Boot Partition: Not Supported 00:26:46.778 Memory Page Size Minimum: 4096 bytes 00:26:46.778 Memory Page Size Maximum: 4096 bytes 00:26:46.778 Persistent Memory Region: Not Supported 00:26:46.778 Optional Asynchronous Events Supported 00:26:46.778 Namespace Attribute Notices: Supported 00:26:46.778 Firmware Activation Notices: Not Supported 00:26:46.778 ANA Change Notices: Supported 00:26:46.778 PLE Aggregate Log Change Notices: Not Supported 00:26:46.778 LBA Status Info Alert Notices: Not Supported 00:26:46.778 EGE Aggregate Log Change Notices: Not Supported 00:26:46.778 Normal NVM Subsystem Shutdown event: Not Supported 00:26:46.778 Zone Descriptor Change Notices: Not Supported 00:26:46.778 Discovery Log Change Notices: Not Supported 00:26:46.778 Controller Attributes 00:26:46.778 128-bit Host Identifier: Supported 00:26:46.778 Non-Operational Permissive Mode: Not Supported 00:26:46.778 NVM Sets: Not Supported 00:26:46.778 Read Recovery Levels: Not Supported 00:26:46.778 Endurance Groups: Not Supported 00:26:46.778 Predictable Latency Mode: Not Supported 00:26:46.778 Traffic Based Keep ALive: Supported 00:26:46.778 Namespace Granularity: Not Supported 00:26:46.778 SQ Associations: Not Supported 00:26:46.778 UUID List: Not Supported 
00:26:46.778 Multi-Domain Subsystem: Not Supported 00:26:46.778 Fixed Capacity Management: Not Supported 00:26:46.778 Variable Capacity Management: Not Supported 00:26:46.778 Delete Endurance Group: Not Supported 00:26:46.778 Delete NVM Set: Not Supported 00:26:46.778 Extended LBA Formats Supported: Not Supported 00:26:46.778 Flexible Data Placement Supported: Not Supported 00:26:46.778 00:26:46.778 Controller Memory Buffer Support 00:26:46.778 ================================ 00:26:46.778 Supported: No 00:26:46.778 00:26:46.778 Persistent Memory Region Support 00:26:46.778 ================================ 00:26:46.778 Supported: No 00:26:46.778 00:26:46.778 Admin Command Set Attributes 00:26:46.778 ============================ 00:26:46.778 Security Send/Receive: Not Supported 00:26:46.778 Format NVM: Not Supported 00:26:46.778 Firmware Activate/Download: Not Supported 00:26:46.778 Namespace Management: Not Supported 00:26:46.778 Device Self-Test: Not Supported 00:26:46.778 Directives: Not Supported 00:26:46.778 NVMe-MI: Not Supported 00:26:46.778 Virtualization Management: Not Supported 00:26:46.778 Doorbell Buffer Config: Not Supported 00:26:46.778 Get LBA Status Capability: Not Supported 00:26:46.778 Command & Feature Lockdown Capability: Not Supported 00:26:46.778 Abort Command Limit: 4 00:26:46.778 Async Event Request Limit: 4 00:26:46.778 Number of Firmware Slots: N/A 00:26:46.778 Firmware Slot 1 Read-Only: N/A 00:26:46.778 Firmware Activation Without Reset: N/A 00:26:46.778 Multiple Update Detection Support: N/A 00:26:46.778 Firmware Update Granularity: No Information Provided 00:26:46.778 Per-Namespace SMART Log: Yes 00:26:46.778 Asymmetric Namespace Access Log Page: Supported 00:26:46.778 ANA Transition Time : 10 sec 00:26:46.778 00:26:46.778 Asymmetric Namespace Access Capabilities 00:26:46.778 ANA Optimized State : Supported 00:26:46.778 ANA Non-Optimized State : Supported 00:26:46.778 ANA Inaccessible State : Supported 00:26:46.778 ANA Persistent Loss 
State : Supported 00:26:46.778 ANA Change State : Supported 00:26:46.778 ANAGRPID is not changed : No 00:26:46.778 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:46.778 00:26:46.778 ANA Group Identifier Maximum : 128 00:26:46.778 Number of ANA Group Identifiers : 128 00:26:46.778 Max Number of Allowed Namespaces : 1024 00:26:46.778 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:46.778 Command Effects Log Page: Supported 00:26:46.778 Get Log Page Extended Data: Supported 00:26:46.778 Telemetry Log Pages: Not Supported 00:26:46.778 Persistent Event Log Pages: Not Supported 00:26:46.778 Supported Log Pages Log Page: May Support 00:26:46.778 Commands Supported & Effects Log Page: Not Supported 00:26:46.779 Feature Identifiers & Effects Log Page:May Support 00:26:46.779 NVMe-MI Commands & Effects Log Page: May Support 00:26:46.779 Data Area 4 for Telemetry Log: Not Supported 00:26:46.779 Error Log Page Entries Supported: 128 00:26:46.779 Keep Alive: Supported 00:26:46.779 Keep Alive Granularity: 1000 ms 00:26:46.779 00:26:46.779 NVM Command Set Attributes 00:26:46.779 ========================== 00:26:46.779 Submission Queue Entry Size 00:26:46.779 Max: 64 00:26:46.779 Min: 64 00:26:46.779 Completion Queue Entry Size 00:26:46.779 Max: 16 00:26:46.779 Min: 16 00:26:46.779 Number of Namespaces: 1024 00:26:46.779 Compare Command: Not Supported 00:26:46.779 Write Uncorrectable Command: Not Supported 00:26:46.779 Dataset Management Command: Supported 00:26:46.779 Write Zeroes Command: Supported 00:26:46.779 Set Features Save Field: Not Supported 00:26:46.779 Reservations: Not Supported 00:26:46.779 Timestamp: Not Supported 00:26:46.779 Copy: Not Supported 00:26:46.779 Volatile Write Cache: Present 00:26:46.779 Atomic Write Unit (Normal): 1 00:26:46.779 Atomic Write Unit (PFail): 1 00:26:46.779 Atomic Compare & Write Unit: 1 00:26:46.779 Fused Compare & Write: Not Supported 00:26:46.779 Scatter-Gather List 00:26:46.779 SGL Command Set: Supported 00:26:46.779 SGL 
Keyed: Not Supported 00:26:46.779 SGL Bit Bucket Descriptor: Not Supported 00:26:46.779 SGL Metadata Pointer: Not Supported 00:26:46.779 Oversized SGL: Not Supported 00:26:46.779 SGL Metadata Address: Not Supported 00:26:46.779 SGL Offset: Supported 00:26:46.779 Transport SGL Data Block: Not Supported 00:26:46.779 Replay Protected Memory Block: Not Supported 00:26:46.779 00:26:46.779 Firmware Slot Information 00:26:46.779 ========================= 00:26:46.779 Active slot: 0 00:26:46.779 00:26:46.779 Asymmetric Namespace Access 00:26:46.779 =========================== 00:26:46.779 Change Count : 0 00:26:46.779 Number of ANA Group Descriptors : 1 00:26:46.779 ANA Group Descriptor : 0 00:26:46.779 ANA Group ID : 1 00:26:46.779 Number of NSID Values : 1 00:26:46.779 Change Count : 0 00:26:46.779 ANA State : 1 00:26:46.779 Namespace Identifier : 1 00:26:46.779 00:26:46.779 Commands Supported and Effects 00:26:46.779 ============================== 00:26:46.779 Admin Commands 00:26:46.779 -------------- 00:26:46.779 Get Log Page (02h): Supported 00:26:46.779 Identify (06h): Supported 00:26:46.779 Abort (08h): Supported 00:26:46.779 Set Features (09h): Supported 00:26:46.779 Get Features (0Ah): Supported 00:26:46.779 Asynchronous Event Request (0Ch): Supported 00:26:46.779 Keep Alive (18h): Supported 00:26:46.779 I/O Commands 00:26:46.779 ------------ 00:26:46.779 Flush (00h): Supported 00:26:46.779 Write (01h): Supported LBA-Change 00:26:46.779 Read (02h): Supported 00:26:46.779 Write Zeroes (08h): Supported LBA-Change 00:26:46.779 Dataset Management (09h): Supported 00:26:46.779 00:26:46.779 Error Log 00:26:46.779 ========= 00:26:46.779 Entry: 0 00:26:46.779 Error Count: 0x3 00:26:46.779 Submission Queue Id: 0x0 00:26:46.779 Command Id: 0x5 00:26:46.779 Phase Bit: 0 00:26:46.779 Status Code: 0x2 00:26:46.779 Status Code Type: 0x0 00:26:46.779 Do Not Retry: 1 00:26:46.779 Error Location: 0x28 00:26:46.779 LBA: 0x0 00:26:46.779 Namespace: 0x0 00:26:46.779 Vendor Log Page: 
0x0 00:26:46.779 ----------- 00:26:46.779 Entry: 1 00:26:46.779 Error Count: 0x2 00:26:46.779 Submission Queue Id: 0x0 00:26:46.779 Command Id: 0x5 00:26:46.779 Phase Bit: 0 00:26:46.779 Status Code: 0x2 00:26:46.779 Status Code Type: 0x0 00:26:46.779 Do Not Retry: 1 00:26:46.779 Error Location: 0x28 00:26:46.779 LBA: 0x0 00:26:46.779 Namespace: 0x0 00:26:46.779 Vendor Log Page: 0x0 00:26:46.779 ----------- 00:26:46.779 Entry: 2 00:26:46.779 Error Count: 0x1 00:26:46.779 Submission Queue Id: 0x0 00:26:46.779 Command Id: 0x4 00:26:46.779 Phase Bit: 0 00:26:46.779 Status Code: 0x2 00:26:46.779 Status Code Type: 0x0 00:26:46.779 Do Not Retry: 1 00:26:46.779 Error Location: 0x28 00:26:46.779 LBA: 0x0 00:26:46.779 Namespace: 0x0 00:26:46.779 Vendor Log Page: 0x0 00:26:46.779 00:26:46.779 Number of Queues 00:26:46.779 ================ 00:26:46.779 Number of I/O Submission Queues: 128 00:26:46.779 Number of I/O Completion Queues: 128 00:26:46.779 00:26:46.779 ZNS Specific Controller Data 00:26:46.779 ============================ 00:26:46.779 Zone Append Size Limit: 0 00:26:46.779 00:26:46.779 00:26:46.779 Active Namespaces 00:26:46.779 ================= 00:26:46.779 get_feature(0x05) failed 00:26:46.779 Namespace ID:1 00:26:46.779 Command Set Identifier: NVM (00h) 00:26:46.779 Deallocate: Supported 00:26:46.779 Deallocated/Unwritten Error: Not Supported 00:26:46.779 Deallocated Read Value: Unknown 00:26:46.779 Deallocate in Write Zeroes: Not Supported 00:26:46.779 Deallocated Guard Field: 0xFFFF 00:26:46.779 Flush: Supported 00:26:46.779 Reservation: Not Supported 00:26:46.779 Namespace Sharing Capabilities: Multiple Controllers 00:26:46.779 Size (in LBAs): 3750748848 (1788GiB) 00:26:46.779 Capacity (in LBAs): 3750748848 (1788GiB) 00:26:46.779 Utilization (in LBAs): 3750748848 (1788GiB) 00:26:46.779 UUID: 3e31b6a5-2cd0-4b3d-8b00-db679878cca4 00:26:46.779 Thin Provisioning: Not Supported 00:26:46.779 Per-NS Atomic Units: Yes 00:26:46.779 Atomic Write Unit (Normal): 8 
00:26:46.779 Atomic Write Unit (PFail): 8 00:26:46.779 Preferred Write Granularity: 8 00:26:46.779 Atomic Compare & Write Unit: 8 00:26:46.779 Atomic Boundary Size (Normal): 0 00:26:46.779 Atomic Boundary Size (PFail): 0 00:26:46.779 Atomic Boundary Offset: 0 00:26:46.779 NGUID/EUI64 Never Reused: No 00:26:46.779 ANA group ID: 1 00:26:46.779 Namespace Write Protected: No 00:26:46.779 Number of LBA Formats: 1 00:26:46.779 Current LBA Format: LBA Format #00 00:26:46.779 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:46.779 00:26:46.779 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:46.779 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:46.779 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:26:46.779 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:46.779 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:26:46.779 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:46.779 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:46.779 rmmod nvme_tcp 00:26:46.779 rmmod nvme_fabrics 00:26:46.779 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:46.779 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:26:46.779 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:26:46.779 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:26:46.779 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:46.779 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:46.779 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:46.779 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:46.779 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:46.779 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.779 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:46.779 23:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.343 23:15:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:49.343 23:15:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:49.343 23:15:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:49.343 23:15:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:26:49.343 23:15:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:49.343 23:15:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:49.343 23:15:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:49.343 23:15:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:49.343 
23:15:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:49.343 23:15:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:49.343 23:15:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:52.644 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:26:52.644 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:26:52.644 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:26:52.644 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:26:52.644 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:26:52.644 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:26:52.644 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:26:52.644 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:26:52.644 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:26:52.906 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:26:52.906 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:26:52.906 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:26:52.906 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:26:52.906 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:26:52.906 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:26:52.906 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:26:52.906 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:26:52.906 00:26:52.906 real 0m19.948s 00:26:52.906 user 0m5.381s 00:26:52.906 sys 0m11.681s 00:26:52.906 23:15:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:52.906 23:15:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:52.906 ************************************ 00:26:52.906 END TEST nvmf_identify_kernel_target 00:26:52.906 ************************************ 00:26:52.907 23:15:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:52.907 23:15:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:52.907 23:15:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:52.907 23:15:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.169 ************************************ 00:26:53.169 START TEST nvmf_auth_host 00:26:53.169 ************************************ 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:53.169 * Looking for test storage... 00:26:53.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@5 -- # export PATH 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:53.169 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:53.170 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:53.170 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:53.170 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:53.170 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:26:53.170 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:53.170 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:53.170 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:53.170 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:53.170 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:53.170 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:53.170 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:53.170 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:53.170 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:53.170 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:53.170 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:53.170 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.170 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:53.170 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.170 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:53.170 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:53.170 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:26:53.170 23:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:01.318 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:01.318 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:01.319 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:01.319 23:15:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:01.319 Found net devices under 0000:31:00.0: cvl_0_0 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:01.319 Found net devices under 0000:31:00.1: cvl_0_1 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:01.319 
23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:01.319 23:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:01.319 23:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:01.319 23:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:01.580 23:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:01.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:01.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.685 ms 00:27:01.580 00:27:01.580 --- 10.0.0.2 ping statistics --- 00:27:01.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.580 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms 00:27:01.580 23:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:01.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:01.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.338 ms 00:27:01.580 00:27:01.580 --- 10.0.0.1 ping statistics --- 00:27:01.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.580 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:27:01.580 23:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:01.580 23:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:27:01.580 23:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:01.580 23:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:01.580 23:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:01.580 23:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:01.580 23:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:01.580 23:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:01.580 23:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:01.580 23:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:01.580 23:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:01.580 23:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:01.580 23:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.580 23:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1010425 00:27:01.580 23:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1010425 00:27:01.580 23:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:01.580 23:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1010425 ']' 00:27:01.580 23:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.580 23:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:01.580 23:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:01.580 23:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:01.580 23:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.522 23:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:02.522 23:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:27:02.522 23:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:02.522 23:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:02.522 23:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:02.522 23:15:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9cadc542721602a9c287d5fc52263fe9 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.x2e 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9cadc542721602a9c287d5fc52263fe9 0 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9cadc542721602a9c287d5fc52263fe9 0 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9cadc542721602a9c287d5fc52263fe9 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.x2e 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.x2e 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.x2e 
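The gen_dhchap_key/format_dhchap_key calls traced above (and repeated below for each key slot) can be condensed into a standalone sketch. This is a hypothetical re-creation from the trace, not SPDK's actual nvmf/common.sh: the DHHC-1 payload is assumed to be base64 of the raw key bytes followed by their CRC32, which is the encoding nvme-cli documents for gen-dhchap-key.

```shell
# Hedged sketch of gen_dhchap_key as reconstructed from the trace above.
# Assumption: a DHHC-1 key is "DHHC-1:<hmac-id>:<base64(key || crc32(key))>:".
gen_dhchap_key() {
    local digest=$1 len=$2 d key file
    case $digest in                 # hmac id, as in the trace's digests map
        null) d=0 ;; sha256) d=1 ;; sha384) d=2 ;; sha512) d=3 ;;
        *) echo "unknown digest: $digest" >&2; return 1 ;;
    esac
    # len hex characters = len/2 random bytes, exactly like the xxd call above
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    file=$(mktemp -t "spdk.key-$digest.XXX")
    python3 - "$key" "$d" <<'PY' > "$file"
import base64, binascii, sys
key = bytes.fromhex(sys.argv[1])
crc = binascii.crc32(key).to_bytes(4, 'little')   # assumed little-endian CRC tail
print('DHHC-1:%02x:%s:' % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PY
    chmod 0600 "$file"
    echo "$file"
}
```

Used the same way the trace does: `keys[0]=$(gen_dhchap_key null 32)`, `ckeys[0]=$(gen_dhchap_key sha512 64)`, and so on for each slot.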
00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b33bf8a7ab9803b7b92b903c75bfccf63e491356427c5e4eb256e4bd2c1ac9cc 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.TZI 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b33bf8a7ab9803b7b92b903c75bfccf63e491356427c5e4eb256e4bd2c1ac9cc 3 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b33bf8a7ab9803b7b92b903c75bfccf63e491356427c5e4eb256e4bd2c1ac9cc 3 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b33bf8a7ab9803b7b92b903c75bfccf63e491356427c5e4eb256e4bd2c1ac9cc 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.TZI 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.TZI 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.TZI 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0e3ca16b4292ef64a5b955cdb9927e0e7842cb571f28cccd 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.4Nw 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0e3ca16b4292ef64a5b955cdb9927e0e7842cb571f28cccd 0 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0e3ca16b4292ef64a5b955cdb9927e0e7842cb571f28cccd 0 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:02.522 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
prefix=DHHC-1 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0e3ca16b4292ef64a5b955cdb9927e0e7842cb571f28cccd 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.4Nw 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.4Nw 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.4Nw 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d89487a46b8f5cb8222c3a96607136f63e7c577f64a415e5 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.mjI 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d89487a46b8f5cb8222c3a96607136f63e7c577f64a415e5 2 00:27:02.523 23:15:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d89487a46b8f5cb8222c3a96607136f63e7c577f64a415e5 2 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d89487a46b8f5cb8222c3a96607136f63e7c577f64a415e5 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.mjI 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.mjI 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.mjI 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=97b96530e6227e033dad9b517ed3e7db 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 
00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.jjE 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 97b96530e6227e033dad9b517ed3e7db 1 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 97b96530e6227e033dad9b517ed3e7db 1 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=97b96530e6227e033dad9b517ed3e7db 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:02.523 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.jjE 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.jjE 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.jjE 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 
/dev/urandom 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2523a0af599b3d3c7520b5b8f375e5a1 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.kNI 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2523a0af599b3d3c7520b5b8f375e5a1 1 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2523a0af599b3d3c7520b5b8f375e5a1 1 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2523a0af599b3d3c7520b5b8f375e5a1 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.kNI 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.kNI 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.kNI 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:02.785 23:15:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2144ee3cedb2944c7dcced9102185a36f603f1d242fbcb05 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.S6L 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2144ee3cedb2944c7dcced9102185a36f603f1d242fbcb05 2 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2144ee3cedb2944c7dcced9102185a36f603f1d242fbcb05 2 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2144ee3cedb2944c7dcced9102185a36f603f1d242fbcb05 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.S6L 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.S6L 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.S6L 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # 
local digest len file key 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=438b485f3ecafb76188ee061e05b9123 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.A4p 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 438b485f3ecafb76188ee061e05b9123 0 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 438b485f3ecafb76188ee061e05b9123 0 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=438b485f3ecafb76188ee061e05b9123 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.A4p 00:27:02.785 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.A4p 00:27:02.786 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.A4p 00:27:02.786 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:02.786 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:02.786 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:02.786 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:02.786 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:02.786 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:02.786 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:02.786 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3b4c99d72014cccc9d6ffd7c0168c834bae9a22a63efbe7d0b40b2f14ea4a8fa 00:27:02.786 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:02.786 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.qZx 00:27:02.786 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3b4c99d72014cccc9d6ffd7c0168c834bae9a22a63efbe7d0b40b2f14ea4a8fa 3 00:27:02.786 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3b4c99d72014cccc9d6ffd7c0168c834bae9a22a63efbe7d0b40b2f14ea4a8fa 3 00:27:02.786 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:02.786 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:02.786 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3b4c99d72014cccc9d6ffd7c0168c834bae9a22a63efbe7d0b40b2f14ea4a8fa 00:27:02.786 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:02.786 23:15:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:03.047 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.qZx 00:27:03.047 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.qZx 00:27:03.047 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.qZx 00:27:03.047 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:03.047 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1010425 00:27:03.047 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1010425 ']' 00:27:03.047 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:03.047 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:03.047 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:03.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
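The nvmfappstart/waitforlisten pair being traced here boils down to: start nvmf_tgt inside the namespace, then poll the RPC socket until the app answers. A hedged sketch follows; `waitfor_rpc` is my name rather than SPDK's, the paths and flags are taken verbatim from the trace, and the launch is guarded so it only runs where the namespace actually exists.

```shell
# Poll a command until it succeeds -- a stand-in for waitforlisten in the trace.
waitfor_rpc() {
    local i
    for i in $(seq 1 100); do
        "$@" >/dev/null 2>&1 && return 0
        sleep 0.1
    done
    return 1
}

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
if ip netns pids cvl_0_0_ns_spdk >/dev/null 2>&1; then
    # Same invocation as the trace: core 0, all trace flags, nvme_auth logging.
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvme_auth &
    nvmfpid=$!
    # rpc_get_methods answers as soon as the app listens on /var/tmp/spdk.sock
    waitfor_rpc "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods
fi
```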
00:27:03.047 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:03.047 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.047 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:03.047 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:27:03.047 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:03.047 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.x2e 00:27:03.047 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.047 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.047 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.047 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.TZI ]] 00:27:03.047 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.TZI 00:27:03.047 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.047 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.047 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.047 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:03.047 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.4Nw 00:27:03.047 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.048 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:03.048 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.048 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.mjI ]] 00:27:03.048 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mjI 00:27:03.048 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.048 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.048 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.048 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:03.048 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.jjE 00:27:03.048 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.048 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.048 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.048 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.kNI ]] 00:27:03.048 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kNI 00:27:03.048 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.048 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.S6L 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.A4p ]] 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.A4p 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.qZx 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:03.307 23:15:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:03.307 23:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:07.509 Waiting for block devices as requested 00:27:07.509 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:07.509 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:07.509 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:07.509 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:07.509 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:07.509 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:07.509 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:07.509 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:07.509 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:07.509 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:07.770 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:07.770 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:07.770 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:07.770 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:08.030 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:08.030 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:08.030 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:08.971 No valid GPT data, bailing 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 
-- # echo 10.0.0.1 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.1 -t tcp -s 4420 00:27:08.971 00:27:08.971 Discovery Log Number of Records 2, Generation counter 2 00:27:08.971 =====Discovery Log Entry 0====== 00:27:08.971 trtype: tcp 00:27:08.971 adrfam: ipv4 00:27:08.971 subtype: current discovery subsystem 00:27:08.971 treq: not specified, sq flow control disable supported 00:27:08.971 portid: 1 00:27:08.971 trsvcid: 4420 00:27:08.971 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:08.971 traddr: 10.0.0.1 00:27:08.971 eflags: none 00:27:08.971 sectype: none 00:27:08.971 =====Discovery Log Entry 1====== 00:27:08.971 trtype: tcp 00:27:08.971 adrfam: ipv4 00:27:08.971 subtype: nvme subsystem 00:27:08.971 treq: not specified, sq flow control disable supported 00:27:08.971 portid: 1 00:27:08.971 trsvcid: 4420 00:27:08.971 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:08.971 traddr: 10.0.0.1 00:27:08.971 eflags: none 00:27:08.971 sectype: none 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGUzY2ExNmI0MjkyZWY2NGE1Yjk1NWNkYjk5MjdlMGU3ODQyY2I1NzFmMjhjY2NkivrYWg==: 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGUzY2ExNmI0MjkyZWY2NGE1Yjk1NWNkYjk5MjdlMGU3ODQyY2I1NzFmMjhjY2NkivrYWg==: 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: ]] 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.971 nvme0n1 00:27:08.971 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.972 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.972 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.972 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.972 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.972 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.972 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.972 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.972 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:08.972 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.972 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.972 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:08.972 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:08.972 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.972 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:08.972 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.972 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.972 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:08.972 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:08.972 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNhZGM1NDI3MjE2MDJhOWMyODdkNWZjNTIyNjNmZTmJL61O: 00:27:08.972 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: 00:27:08.972 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.972 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:08.972 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNhZGM1NDI3MjE2MDJhOWMyODdkNWZjNTIyNjNmZTmJL61O: 00:27:08.972 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: ]] 00:27:08.972 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: 00:27:08.972 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:08.972 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.972 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:08.972 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:08.972 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:08.972 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.972 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:08.972 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.972 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.232 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.232 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.232 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:09.232 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:09.232 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:09.232 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.232 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.232 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:27:09.232 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.232 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:09.232 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:09.232 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:09.232 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:09.232 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.232 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.232 nvme0n1 00:27:09.232 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.232 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.232 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.233 23:15:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGUzY2ExNmI0MjkyZWY2NGE1Yjk1NWNkYjk5MjdlMGU3ODQyY2I1NzFmMjhjY2NkivrYWg==: 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGUzY2ExNmI0MjkyZWY2NGE1Yjk1NWNkYjk5MjdlMGU3ODQyY2I1NzFmMjhjY2NkivrYWg==: 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: ]] 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.233 
23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.233 23:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.494 nvme0n1 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTdiOTY1MzBlNjIyN2UwMzNkYWQ5YjUxN2VkM2U3ZGL3pMTg: 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTdiOTY1MzBlNjIyN2UwMzNkYWQ5YjUxN2VkM2U3ZGL3pMTg: 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: ]] 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.494 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:27:09.756 nvme0n1 00:27:09.756 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.756 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.756 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.756 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.756 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.756 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.756 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.756 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.756 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.756 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.756 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.756 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.756 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:09.756 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.756 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:09.756 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:09.757 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:09.757 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MjE0NGVlM2NlZGIyOTQ0YzdkY2NlZDkxMDIxODVhMzZmNjAzZjFkMjQyZmJjYjA1efvgnA==: 00:27:09.757 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: 00:27:09.757 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:09.757 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:09.757 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE0NGVlM2NlZGIyOTQ0YzdkY2NlZDkxMDIxODVhMzZmNjAzZjFkMjQyZmJjYjA1efvgnA==: 00:27:09.757 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: ]] 00:27:09.757 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: 00:27:09.757 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:09.757 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.757 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:09.757 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:09.757 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:09.757 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.757 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:09.757 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.757 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.757 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.757 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.757 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:09.757 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:09.757 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:09.757 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.757 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.757 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:09.757 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.757 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:09.757 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:09.757 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:09.757 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:09.757 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.757 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.017 nvme0n1 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 
-- # xtrace_disable 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2I0Yzk5ZDcyMDE0Y2NjYzlkNmZmZDdjMDE2OGM4MzRiYWU5YTIyYTYzZWZiZTdkMGI0MGIyZjE0ZWE0YThmYRzP5Y8=: 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:10.017 23:15:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2I0Yzk5ZDcyMDE0Y2NjYzlkNmZmZDdjMDE2OGM4MzRiYWU5YTIyYTYzZWZiZTdkMGI0MGIyZjE0ZWE0YThmYRzP5Y8=: 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.017 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:10.018 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.018 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:10.018 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:10.018 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:10.018 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:10.018 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.018 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.018 nvme0n1 00:27:10.018 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.018 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.018 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.018 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.018 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.018 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.278 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.278 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.278 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.278 
23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.278 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.278 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:10.278 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.278 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:10.278 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.278 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.278 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:10.278 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:10.278 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNhZGM1NDI3MjE2MDJhOWMyODdkNWZjNTIyNjNmZTmJL61O: 00:27:10.278 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: 00:27:10.278 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.278 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:10.278 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNhZGM1NDI3MjE2MDJhOWMyODdkNWZjNTIyNjNmZTmJL61O: 00:27:10.278 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: ]] 00:27:10.278 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: 00:27:10.278 
23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:10.278 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.278 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:10.278 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:10.278 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:10.278 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.278 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:10.278 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.279 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.279 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.279 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.279 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:10.279 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:10.279 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:10.279 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.279 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.279 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:10.279 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.279 23:15:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:10.279 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:10.279 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:10.279 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:10.279 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.279 23:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.279 nvme0n1 00:27:10.279 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.279 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.279 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.279 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.279 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.279 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.540 23:15:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGUzY2ExNmI0MjkyZWY2NGE1Yjk1NWNkYjk5MjdlMGU3ODQyY2I1NzFmMjhjY2NkivrYWg==: 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGUzY2ExNmI0MjkyZWY2NGE1Yjk1NWNkYjk5MjdlMGU3ODQyY2I1NzFmMjhjY2NkivrYWg==: 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: ]] 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:10.540 23:15:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.540 nvme0n1 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.540 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.802 23:15:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTdiOTY1MzBlNjIyN2UwMzNkYWQ5YjUxN2VkM2U3ZGL3pMTg: 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTdiOTY1MzBlNjIyN2UwMzNkYWQ5YjUxN2VkM2U3ZGL3pMTg: 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: ]] 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.802 nvme0n1 00:27:10.802 23:15:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.802 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE0NGVlM2NlZGIyOTQ0YzdkY2NlZDkxMDIxODVhMzZmNjAzZjFkMjQyZmJjYjA1efvgnA==: 00:27:11.063 23:15:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE0NGVlM2NlZGIyOTQ0YzdkY2NlZDkxMDIxODVhMzZmNjAzZjFkMjQyZmJjYjA1efvgnA==: 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: ]] 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.063 nvme0n1 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.063 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2I0Yzk5ZDcyMDE0Y2NjYzlkNmZmZDdjMDE2OGM4MzRiYWU5YTIyYTYzZWZiZTdkMGI0MGIyZjE0ZWE0YThmYRzP5Y8=: 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:M2I0Yzk5ZDcyMDE0Y2NjYzlkNmZmZDdjMDE2OGM4MzRiYWU5YTIyYTYzZWZiZTdkMGI0MGIyZjE0ZWE0YThmYRzP5Y8=: 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.325 23:15:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.325 23:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.325 nvme0n1 00:27:11.325 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.325 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.325 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.325 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.325 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.325 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNhZGM1NDI3MjE2MDJhOWMyODdkNWZjNTIyNjNmZTmJL61O: 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNhZGM1NDI3MjE2MDJhOWMyODdkNWZjNTIyNjNmZTmJL61O: 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: ]] 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_INITIATOR_IP 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.586 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.847 nvme0n1 00:27:11.847 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGUzY2ExNmI0MjkyZWY2NGE1Yjk1NWNkYjk5MjdlMGU3ODQyY2I1NzFmMjhjY2NkivrYWg==: 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGUzY2ExNmI0MjkyZWY2NGE1Yjk1NWNkYjk5MjdlMGU3ODQyY2I1NzFmMjhjY2NkivrYWg==: 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: ]] 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:11.848 
23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.848 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.110 nvme0n1 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:12.110 23:15:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTdiOTY1MzBlNjIyN2UwMzNkYWQ5YjUxN2VkM2U3ZGL3pMTg: 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTdiOTY1MzBlNjIyN2UwMzNkYWQ5YjUxN2VkM2U3ZGL3pMTg: 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: ]] 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:12.110 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:12.111 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:12.111 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.111 23:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.372 nvme0n1 00:27:12.372 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.372 23:15:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.372 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.372 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.372 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.372 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE0NGVlM2NlZGIyOTQ0YzdkY2NlZDkxMDIxODVhMzZmNjAzZjFkMjQyZmJjYjA1efvgnA==: 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: 00:27:12.632 
23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE0NGVlM2NlZGIyOTQ0YzdkY2NlZDkxMDIxODVhMzZmNjAzZjFkMjQyZmJjYjA1efvgnA==: 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: ]] 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:12.632 23:15:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.632 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.892 nvme0n1 00:27:12.892 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.892 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.892 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.892 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.892 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.892 23:15:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.892 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.892 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.892 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.892 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.892 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.892 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.892 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:12.893 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.893 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:12.893 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:12.893 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:12.893 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2I0Yzk5ZDcyMDE0Y2NjYzlkNmZmZDdjMDE2OGM4MzRiYWU5YTIyYTYzZWZiZTdkMGI0MGIyZjE0ZWE0YThmYRzP5Y8=: 00:27:12.893 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:12.893 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:12.893 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:12.893 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2I0Yzk5ZDcyMDE0Y2NjYzlkNmZmZDdjMDE2OGM4MzRiYWU5YTIyYTYzZWZiZTdkMGI0MGIyZjE0ZWE0YThmYRzP5Y8=: 00:27:12.893 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:27:12.893 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:12.893 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.893 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:12.893 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:12.893 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:12.893 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.893 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:12.893 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.893 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.893 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.893 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.893 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:12.893 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:12.893 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:12.893 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.893 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.893 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:12.893 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.893 
23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:12.893 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:12.893 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:12.893 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:12.893 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.893 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.154 nvme0n1 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNhZGM1NDI3MjE2MDJhOWMyODdkNWZjNTIyNjNmZTmJL61O: 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNhZGM1NDI3MjE2MDJhOWMyODdkNWZjNTIyNjNmZTmJL61O: 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: ]] 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.154 23:15:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.154 23:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.725 nvme0n1 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGUzY2ExNmI0MjkyZWY2NGE1Yjk1NWNkYjk5MjdlMGU3ODQyY2I1NzFmMjhjY2NkivrYWg==: 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGUzY2ExNmI0MjkyZWY2NGE1Yjk1NWNkYjk5MjdlMGU3ODQyY2I1NzFmMjhjY2NkivrYWg==: 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: ]] 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:13.725 23:15:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.725 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.296 nvme0n1 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTdiOTY1MzBlNjIyN2UwMzNkYWQ5YjUxN2VkM2U3ZGL3pMTg: 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTdiOTY1MzBlNjIyN2UwMzNkYWQ5YjUxN2VkM2U3ZGL3pMTg: 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: ]] 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.296 23:15:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.296 23:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.867 nvme0n1 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.867 23:15:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE0NGVlM2NlZGIyOTQ0YzdkY2NlZDkxMDIxODVhMzZmNjAzZjFkMjQyZmJjYjA1efvgnA==: 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:14.867 23:15:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE0NGVlM2NlZGIyOTQ0YzdkY2NlZDkxMDIxODVhMzZmNjAzZjFkMjQyZmJjYjA1efvgnA==: 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: ]] 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:14.867 23:15:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.867 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.438 nvme0n1 00:27:15.438 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.438 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.438 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.438 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.439 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.439 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.439 23:15:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.439 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.439 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.439 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.439 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.439 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.439 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:15.439 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.439 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:15.439 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:15.439 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:15.439 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2I0Yzk5ZDcyMDE0Y2NjYzlkNmZmZDdjMDE2OGM4MzRiYWU5YTIyYTYzZWZiZTdkMGI0MGIyZjE0ZWE0YThmYRzP5Y8=: 00:27:15.439 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:15.439 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:15.439 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:15.439 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2I0Yzk5ZDcyMDE0Y2NjYzlkNmZmZDdjMDE2OGM4MzRiYWU5YTIyYTYzZWZiZTdkMGI0MGIyZjE0ZWE0YThmYRzP5Y8=: 00:27:15.439 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:15.439 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:27:15.439 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.439 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:15.439 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:15.439 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:15.439 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.439 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:15.439 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.439 23:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.439 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.439 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.439 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:15.439 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:15.439 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.439 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.439 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.439 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.439 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.439 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.439 23:15:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.439 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.439 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:15.439 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.439 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.699 nvme0n1 00:27:15.699 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.699 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.699 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.699 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.700 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.700 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNhZGM1NDI3MjE2MDJhOWMyODdkNWZjNTIyNjNmZTmJL61O: 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNhZGM1NDI3MjE2MDJhOWMyODdkNWZjNTIyNjNmZTmJL61O: 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: ]] 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:15.960 23:15:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.960 23:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.587 nvme0n1 00:27:16.587 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.587 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.587 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.587 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.587 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.587 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.587 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.587 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.587 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.587 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.587 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.588 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.588 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:16.588 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.588 23:15:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:16.588 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:16.588 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:16.588 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGUzY2ExNmI0MjkyZWY2NGE1Yjk1NWNkYjk5MjdlMGU3ODQyY2I1NzFmMjhjY2NkivrYWg==: 00:27:16.588 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: 00:27:16.588 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:16.588 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:16.588 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGUzY2ExNmI0MjkyZWY2NGE1Yjk1NWNkYjk5MjdlMGU3ODQyY2I1NzFmMjhjY2NkivrYWg==: 00:27:16.588 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: ]] 00:27:16.588 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: 00:27:16.588 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:16.588 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.588 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:16.588 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:16.588 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:16.588 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.588 23:15:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:16.588 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.588 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.588 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.588 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.588 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.588 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.588 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.588 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.588 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.588 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.588 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.588 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.588 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.588 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.588 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:16.588 23:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.588 23:15:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.529 nvme0n1 00:27:17.529 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.529 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.529 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.529 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.529 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.529 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.529 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.529 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.529 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.529 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.529 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.529 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.529 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:17.529 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.529 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:17.529 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:17.529 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:17.529 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:OTdiOTY1MzBlNjIyN2UwMzNkYWQ5YjUxN2VkM2U3ZGL3pMTg: 00:27:17.529 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: 00:27:17.529 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.529 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:17.529 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTdiOTY1MzBlNjIyN2UwMzNkYWQ5YjUxN2VkM2U3ZGL3pMTg: 00:27:17.529 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: ]] 00:27:17.529 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: 00:27:17.529 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:17.529 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.529 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:17.529 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:17.529 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:17.530 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.530 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:17.530 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.530 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.530 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.530 23:15:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.530 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.530 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.530 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.530 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.530 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.530 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.530 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.530 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.530 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.530 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.530 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:17.530 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.530 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.470 nvme0n1 00:27:18.470 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.470 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.470 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.470 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.470 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.470 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.470 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.470 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.470 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.470 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.470 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.470 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.470 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:18.470 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.470 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.470 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:18.470 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:18.470 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE0NGVlM2NlZGIyOTQ0YzdkY2NlZDkxMDIxODVhMzZmNjAzZjFkMjQyZmJjYjA1efvgnA==: 00:27:18.470 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: 00:27:18.470 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.470 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:18.470 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:MjE0NGVlM2NlZGIyOTQ0YzdkY2NlZDkxMDIxODVhMzZmNjAzZjFkMjQyZmJjYjA1efvgnA==: 00:27:18.470 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: ]] 00:27:18.471 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: 00:27:18.471 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:18.471 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.471 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:18.471 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:18.471 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:18.471 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.471 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:18.471 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.471 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.471 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.471 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.471 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.471 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.471 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.471 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.471 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.471 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.471 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.471 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.471 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.471 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.471 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:18.471 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.471 23:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.042 nvme0n1 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2I0Yzk5ZDcyMDE0Y2NjYzlkNmZmZDdjMDE2OGM4MzRiYWU5YTIyYTYzZWZiZTdkMGI0MGIyZjE0ZWE0YThmYRzP5Y8=: 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2I0Yzk5ZDcyMDE0Y2NjYzlkNmZmZDdjMDE2OGM4MzRiYWU5YTIyYTYzZWZiZTdkMGI0MGIyZjE0ZWE0YThmYRzP5Y8=: 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.042 
23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.042 23:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.984 nvme0n1 00:27:19.984 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.984 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.984 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.984 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.984 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.984 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.984 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.984 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.984 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.984 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNhZGM1NDI3MjE2MDJhOWMyODdkNWZjNTIyNjNmZTmJL61O: 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNhZGM1NDI3MjE2MDJhOWMyODdkNWZjNTIyNjNmZTmJL61O: 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: ]] 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.985 nvme0n1 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.985 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.245 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.245 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.245 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.245 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.245 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.245 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:20.246 
23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGUzY2ExNmI0MjkyZWY2NGE1Yjk1NWNkYjk5MjdlMGU3ODQyY2I1NzFmMjhjY2NkivrYWg==: 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGUzY2ExNmI0MjkyZWY2NGE1Yjk1NWNkYjk5MjdlMGU3ODQyY2I1NzFmMjhjY2NkivrYWg==: 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: ]] 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.246 nvme0n1 
00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.246 23:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.246 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.246 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.246 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.246 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.246 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.246 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.246 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:20.246 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.246 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:20.246 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:20.246 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:20.246 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTdiOTY1MzBlNjIyN2UwMzNkYWQ5YjUxN2VkM2U3ZGL3pMTg: 00:27:20.246 23:15:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: 00:27:20.246 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:20.246 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:20.246 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTdiOTY1MzBlNjIyN2UwMzNkYWQ5YjUxN2VkM2U3ZGL3pMTg: 00:27:20.246 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: ]] 00:27:20.246 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: 00:27:20.246 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:20.246 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.246 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:20.246 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:20.246 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:20.246 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.246 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.507 
23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.507 nvme0n1 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.507 23:15:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE0NGVlM2NlZGIyOTQ0YzdkY2NlZDkxMDIxODVhMzZmNjAzZjFkMjQyZmJjYjA1efvgnA==: 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MjE0NGVlM2NlZGIyOTQ0YzdkY2NlZDkxMDIxODVhMzZmNjAzZjFkMjQyZmJjYjA1efvgnA==: 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: ]] 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.507 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.769 nvme0n1 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2I0Yzk5ZDcyMDE0Y2NjYzlkNmZmZDdjMDE2OGM4MzRiYWU5YTIyYTYzZWZiZTdkMGI0MGIyZjE0ZWE0YThmYRzP5Y8=: 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2I0Yzk5ZDcyMDE0Y2NjYzlkNmZmZDdjMDE2OGM4MzRiYWU5YTIyYTYzZWZiZTdkMGI0MGIyZjE0ZWE0YThmYRzP5Y8=: 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.769 23:15:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.769 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.030 nvme0n1 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNhZGM1NDI3MjE2MDJhOWMyODdkNWZjNTIyNjNmZTmJL61O: 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNhZGM1NDI3MjE2MDJhOWMyODdkNWZjNTIyNjNmZTmJL61O: 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: ]] 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.030 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.291 nvme0n1 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:21.291 
23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGUzY2ExNmI0MjkyZWY2NGE1Yjk1NWNkYjk5MjdlMGU3ODQyY2I1NzFmMjhjY2NkivrYWg==: 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGUzY2ExNmI0MjkyZWY2NGE1Yjk1NWNkYjk5MjdlMGU3ODQyY2I1NzFmMjhjY2NkivrYWg==: 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: ]] 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.291 23:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.552 nvme0n1 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTdiOTY1MzBlNjIyN2UwMzNkYWQ5YjUxN2VkM2U3ZGL3pMTg: 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: 
00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTdiOTY1MzBlNjIyN2UwMzNkYWQ5YjUxN2VkM2U3ZGL3pMTg: 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: ]] 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:21.552 23:15:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.552 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:21.553 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:21.553 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:21.553 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:21.553 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.553 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.814 nvme0n1 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.814 23:15:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE0NGVlM2NlZGIyOTQ0YzdkY2NlZDkxMDIxODVhMzZmNjAzZjFkMjQyZmJjYjA1efvgnA==: 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE0NGVlM2NlZGIyOTQ0YzdkY2NlZDkxMDIxODVhMzZmNjAzZjFkMjQyZmJjYjA1efvgnA==: 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: ]] 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:21.814 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.815 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:21.815 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:21.815 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:21.815 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:21.815 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.815 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.075 nvme0n1 00:27:22.075 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.075 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.075 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.075 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.075 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.075 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.075 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.075 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.075 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:22.075 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.075 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.075 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.075 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:22.075 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.075 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:22.075 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:22.075 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:22.075 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2I0Yzk5ZDcyMDE0Y2NjYzlkNmZmZDdjMDE2OGM4MzRiYWU5YTIyYTYzZWZiZTdkMGI0MGIyZjE0ZWE0YThmYRzP5Y8=: 00:27:22.075 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:22.075 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:22.075 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:22.075 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2I0Yzk5ZDcyMDE0Y2NjYzlkNmZmZDdjMDE2OGM4MzRiYWU5YTIyYTYzZWZiZTdkMGI0MGIyZjE0ZWE0YThmYRzP5Y8=: 00:27:22.075 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:22.075 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:22.075 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.075 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:22.075 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:27:22.075 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:22.075 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.075 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:22.075 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.075 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.075 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.075 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.076 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:22.076 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.076 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.076 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.076 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.076 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.076 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.076 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.076 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.076 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.076 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:22.076 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.076 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.336 nvme0n1 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.336 23:15:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNhZGM1NDI3MjE2MDJhOWMyODdkNWZjNTIyNjNmZTmJL61O: 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNhZGM1NDI3MjE2MDJhOWMyODdkNWZjNTIyNjNmZTmJL61O: 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: ]] 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.336 23:15:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:22.336 23:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.336 23:15:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.597 nvme0n1 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MGUzY2ExNmI0MjkyZWY2NGE1Yjk1NWNkYjk5MjdlMGU3ODQyY2I1NzFmMjhjY2NkivrYWg==: 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGUzY2ExNmI0MjkyZWY2NGE1Yjk1NWNkYjk5MjdlMGU3ODQyY2I1NzFmMjhjY2NkivrYWg==: 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: ]] 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.597 
23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.597 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.857 nvme0n1 00:27:22.857 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.857 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.857 23:15:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.857 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.857 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.857 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTdiOTY1MzBlNjIyN2UwMzNkYWQ5YjUxN2VkM2U3ZGL3pMTg: 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:23.117 23:15:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTdiOTY1MzBlNjIyN2UwMzNkYWQ5YjUxN2VkM2U3ZGL3pMTg: 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: ]] 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.117 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.378 nvme0n1 00:27:23.378 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.378 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.378 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.378 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.378 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.378 23:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE0NGVlM2NlZGIyOTQ0YzdkY2NlZDkxMDIxODVhMzZmNjAzZjFkMjQyZmJjYjA1efvgnA==: 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE0NGVlM2NlZGIyOTQ0YzdkY2NlZDkxMDIxODVhMzZmNjAzZjFkMjQyZmJjYjA1efvgnA==: 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: ]] 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.378 23:15:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.378 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.639 nvme0n1 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.639 23:15:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2I0Yzk5ZDcyMDE0Y2NjYzlkNmZmZDdjMDE2OGM4MzRiYWU5YTIyYTYzZWZiZTdkMGI0MGIyZjE0ZWE0YThmYRzP5Y8=: 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2I0Yzk5ZDcyMDE0Y2NjYzlkNmZmZDdjMDE2OGM4MzRiYWU5YTIyYTYzZWZiZTdkMGI0MGIyZjE0ZWE0YThmYRzP5Y8=: 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:23.639 23:15:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:23.639 
23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.639 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.901 nvme0n1 00:27:23.901 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.901 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.901 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.901 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.901 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.901 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.161 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.161 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.161 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.161 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.161 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.161 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:24.161 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.161 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:24.161 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.161 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:24.161 23:15:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:24.161 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:24.161 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNhZGM1NDI3MjE2MDJhOWMyODdkNWZjNTIyNjNmZTmJL61O: 00:27:24.162 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: 00:27:24.162 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:24.162 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:24.162 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNhZGM1NDI3MjE2MDJhOWMyODdkNWZjNTIyNjNmZTmJL61O: 00:27:24.162 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: ]] 00:27:24.162 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: 00:27:24.162 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:24.162 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.162 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:24.162 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:24.162 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:24.162 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.162 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:27:24.162 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.162 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.162 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.162 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.162 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.162 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.162 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:24.162 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.162 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.162 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:24.162 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.162 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:24.162 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:24.162 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.162 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:24.162 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.162 23:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.423 nvme0n1 
00:27:24.423 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:24.423 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:24.423 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:24.423 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:24.423 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:24.423 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGUzY2ExNmI0MjkyZWY2NGE1Yjk1NWNkYjk5MjdlMGU3ODQyY2I1NzFmMjhjY2NkivrYWg==:
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==:
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGUzY2ExNmI0MjkyZWY2NGE1Yjk1NWNkYjk5MjdlMGU3ODQyY2I1NzFmMjhjY2NkivrYWg==:
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: ]]
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==:
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:24.684 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:24.944 nvme0n1
00:27:24.944 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:24.944 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:24.944 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:24.944 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:24.944 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:24.944 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTdiOTY1MzBlNjIyN2UwMzNkYWQ5YjUxN2VkM2U3ZGL3pMTg:
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6:
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTdiOTY1MzBlNjIyN2UwMzNkYWQ5YjUxN2VkM2U3ZGL3pMTg:
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: ]]
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6:
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:25.204 23:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:25.464 nvme0n1
00:27:25.464 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:25.464 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:25.464 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:25.464 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:25.464 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:25.464 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE0NGVlM2NlZGIyOTQ0YzdkY2NlZDkxMDIxODVhMzZmNjAzZjFkMjQyZmJjYjA1efvgnA==:
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t:
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE0NGVlM2NlZGIyOTQ0YzdkY2NlZDkxMDIxODVhMzZmNjAzZjFkMjQyZmJjYjA1efvgnA==:
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: ]]
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t:
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:25.725 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:25.986 nvme0n1
00:27:25.986 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:25.986 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:25.986 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:25.986 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:25.986 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:25.986 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:26.247 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:26.247 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:26.247 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:26.247 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:26.247 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:26.247 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:26.247 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4
00:27:26.247 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:26.247 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:26.247 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:26.247 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:26.247 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2I0Yzk5ZDcyMDE0Y2NjYzlkNmZmZDdjMDE2OGM4MzRiYWU5YTIyYTYzZWZiZTdkMGI0MGIyZjE0ZWE0YThmYRzP5Y8=:
00:27:26.247 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:26.247 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:26.247 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:26.247 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2I0Yzk5ZDcyMDE0Y2NjYzlkNmZmZDdjMDE2OGM4MzRiYWU5YTIyYTYzZWZiZTdkMGI0MGIyZjE0ZWE0YThmYRzP5Y8=:
00:27:26.247 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:26.247 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4
00:27:26.247 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:26.247 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:26.247 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:26.247 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:26.247 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:26.247 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:27:26.247 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:26.247 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:26.247 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:26.247 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:26.248 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:26.248 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:26.248 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:26.248 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:26.248 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:26.248 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:26.248 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:26.248 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:26.248 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:26.248 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:26.248 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:26.248 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:26.248 23:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:26.509 nvme0n1
00:27:26.509 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:26.509 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:26.509 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:26.509 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:26.509 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:26.509 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:26.509 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:26.509 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:26.509 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:26.509 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:26.509 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:26.509 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:26.509 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:26.509 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0
00:27:26.509 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:26.509 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:26.509 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:26.509 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:26.509 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNhZGM1NDI3MjE2MDJhOWMyODdkNWZjNTIyNjNmZTmJL61O:
00:27:26.509 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=:
00:27:26.509 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:26.769 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:26.770 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNhZGM1NDI3MjE2MDJhOWMyODdkNWZjNTIyNjNmZTmJL61O:
00:27:26.770 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: ]]
00:27:26.770 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=:
00:27:26.770 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0
00:27:26.770 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:26.770 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:26.770 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:26.770 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:26.770 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:26.770 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:27:26.770 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:26.770 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:26.770 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:26.770 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:26.770 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:26.770 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:26.770 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:26.770 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:26.770 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:26.770 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:26.770 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:26.770 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:26.770 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:26.770 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:26.770 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:26.770 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:26.770 23:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:27.340 nvme0n1
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGUzY2ExNmI0MjkyZWY2NGE1Yjk1NWNkYjk5MjdlMGU3ODQyY2I1NzFmMjhjY2NkivrYWg==:
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==:
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGUzY2ExNmI0MjkyZWY2NGE1Yjk1NWNkYjk5MjdlMGU3ODQyY2I1NzFmMjhjY2NkivrYWg==:
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: ]]
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==:
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:27.340 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:27.601 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:27.601 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:27.601 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:28.173 nvme0n1
00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTdiOTY1MzBlNjIyN2UwMzNkYWQ5YjUxN2VkM2U3ZGL3pMTg: 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:OTdiOTY1MzBlNjIyN2UwMzNkYWQ5YjUxN2VkM2U3ZGL3pMTg: 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: ]] 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.173 23:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.114 nvme0n1 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE0NGVlM2NlZGIyOTQ0YzdkY2NlZDkxMDIxODVhMzZmNjAzZjFkMjQyZmJjYjA1efvgnA==: 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE0NGVlM2NlZGIyOTQ0YzdkY2NlZDkxMDIxODVhMzZmNjAzZjFkMjQyZmJjYjA1efvgnA==: 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: ]] 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 
-- # ip=NVMF_INITIATOR_IP 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.114 23:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.056 nvme0n1 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2I0Yzk5ZDcyMDE0Y2NjYzlkNmZmZDdjMDE2OGM4MzRiYWU5YTIyYTYzZWZiZTdkMGI0MGIyZjE0ZWE0YThmYRzP5Y8=: 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2I0Yzk5ZDcyMDE0Y2NjYzlkNmZmZDdjMDE2OGM4MzRiYWU5YTIyYTYzZWZiZTdkMGI0MGIyZjE0ZWE0YThmYRzP5Y8=: 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.057 23:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:30.628 nvme0n1 00:27:30.628 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.628 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.628 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.628 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.628 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.628 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.628 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.628 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.628 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.628 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.628 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.628 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:30.628 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:30.628 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.628 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:30.628 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.628 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:30.628 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:27:30.628 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:30.628 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNhZGM1NDI3MjE2MDJhOWMyODdkNWZjNTIyNjNmZTmJL61O: 00:27:30.628 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: 00:27:30.628 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:30.628 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.628 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNhZGM1NDI3MjE2MDJhOWMyODdkNWZjNTIyNjNmZTmJL61O: 00:27:30.628 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: ]] 00:27:30.628 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: 00:27:30.628 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:30.628 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.628 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:30.628 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.628 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:30.629 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.629 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:30.629 23:15:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.629 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.629 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.629 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.629 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.629 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.629 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.629 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.629 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.629 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.629 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.629 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.629 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.629 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.629 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:30.629 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.629 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.890 nvme0n1 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGUzY2ExNmI0MjkyZWY2NGE1Yjk1NWNkYjk5MjdlMGU3ODQyY2I1NzFmMjhjY2NkivrYWg==: 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGUzY2ExNmI0MjkyZWY2NGE1Yjk1NWNkYjk5MjdlMGU3ODQyY2I1NzFmMjhjY2NkivrYWg==: 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: ]] 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.890 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.891 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.891 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.891 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.891 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.891 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.891 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.891 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.891 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.891 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:30.891 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.891 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.891 nvme0n1 00:27:30.891 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.891 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.891 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.891 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:30.891 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.891 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTdiOTY1MzBlNjIyN2UwMzNkYWQ5YjUxN2VkM2U3ZGL3pMTg: 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:OTdiOTY1MzBlNjIyN2UwMzNkYWQ5YjUxN2VkM2U3ZGL3pMTg: 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: ]] 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.152 nvme0n1 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.152 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.413 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.413 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.413 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:31.413 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.413 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:31.413 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.414 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:31.414 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE0NGVlM2NlZGIyOTQ0YzdkY2NlZDkxMDIxODVhMzZmNjAzZjFkMjQyZmJjYjA1efvgnA==: 00:27:31.414 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: 00:27:31.414 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:31.414 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.414 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE0NGVlM2NlZGIyOTQ0YzdkY2NlZDkxMDIxODVhMzZmNjAzZjFkMjQyZmJjYjA1efvgnA==: 00:27:31.414 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: ]] 00:27:31.414 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: 00:27:31.414 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:31.414 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.414 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:31.414 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.414 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:31.414 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.414 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:31.414 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.414 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.414 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.414 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.414 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.414 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.414 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.414 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.414 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.414 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.414 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.414 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 
-- # ip=NVMF_INITIATOR_IP 00:27:31.414 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.414 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.414 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:31.414 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.414 23:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.414 nvme0n1 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2I0Yzk5ZDcyMDE0Y2NjYzlkNmZmZDdjMDE2OGM4MzRiYWU5YTIyYTYzZWZiZTdkMGI0MGIyZjE0ZWE0YThmYRzP5Y8=: 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2I0Yzk5ZDcyMDE0Y2NjYzlkNmZmZDdjMDE2OGM4MzRiYWU5YTIyYTYzZWZiZTdkMGI0MGIyZjE0ZWE0YThmYRzP5Y8=: 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.414 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:31.675 nvme0n1 00:27:31.675 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.675 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.675 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.675 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.675 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.675 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.675 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.675 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.675 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.675 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.675 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.675 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:31.675 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.675 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:31.675 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.675 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:31.675 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:31.675 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:31.675 23:15:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNhZGM1NDI3MjE2MDJhOWMyODdkNWZjNTIyNjNmZTmJL61O: 00:27:31.675 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: 00:27:31.675 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:31.675 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:31.675 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNhZGM1NDI3MjE2MDJhOWMyODdkNWZjNTIyNjNmZTmJL61O: 00:27:31.675 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: ]] 00:27:31.675 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: 00:27:31.675 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:31.675 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.675 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:31.675 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:31.675 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:31.675 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.676 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:31.676 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.676 23:15:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.676 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.676 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.676 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.676 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.676 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.676 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.676 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.676 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.676 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.676 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.676 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.676 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.676 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:31.676 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.676 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.937 nvme0n1 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGUzY2ExNmI0MjkyZWY2NGE1Yjk1NWNkYjk5MjdlMGU3ODQyY2I1NzFmMjhjY2NkivrYWg==: 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: 00:27:31.937 23:15:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGUzY2ExNmI0MjkyZWY2NGE1Yjk1NWNkYjk5MjdlMGU3ODQyY2I1NzFmMjhjY2NkivrYWg==: 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: ]] 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 
00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.937 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.198 nvme0n1 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.198 
23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTdiOTY1MzBlNjIyN2UwMzNkYWQ5YjUxN2VkM2U3ZGL3pMTg: 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTdiOTY1MzBlNjIyN2UwMzNkYWQ5YjUxN2VkM2U3ZGL3pMTg: 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: ]] 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.198 23:15:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.198 23:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.459 nvme0n1 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.459 23:15:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE0NGVlM2NlZGIyOTQ0YzdkY2NlZDkxMDIxODVhMzZmNjAzZjFkMjQyZmJjYjA1efvgnA==: 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE0NGVlM2NlZGIyOTQ0YzdkY2NlZDkxMDIxODVhMzZmNjAzZjFkMjQyZmJjYjA1efvgnA==: 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: ]] 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.459 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.460 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.460 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.460 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.460 23:15:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.460 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:32.460 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.460 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.721 nvme0n1 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:32.721 23:15:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2I0Yzk5ZDcyMDE0Y2NjYzlkNmZmZDdjMDE2OGM4MzRiYWU5YTIyYTYzZWZiZTdkMGI0MGIyZjE0ZWE0YThmYRzP5Y8=: 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2I0Yzk5ZDcyMDE0Y2NjYzlkNmZmZDdjMDE2OGM4MzRiYWU5YTIyYTYzZWZiZTdkMGI0MGIyZjE0ZWE0YThmYRzP5Y8=: 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.721 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.982 nvme0n1 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.982 
23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNhZGM1NDI3MjE2MDJhOWMyODdkNWZjNTIyNjNmZTmJL61O: 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNhZGM1NDI3MjE2MDJhOWMyODdkNWZjNTIyNjNmZTmJL61O: 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: ]] 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.982 
23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.982 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.243 nvme0n1 00:27:33.243 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.243 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.243 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.243 23:15:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.243 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.243 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.243 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.243 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.243 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.243 23:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.243 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.243 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.243 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:33.243 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.243 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:33.243 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:33.243 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:33.243 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGUzY2ExNmI0MjkyZWY2NGE1Yjk1NWNkYjk5MjdlMGU3ODQyY2I1NzFmMjhjY2NkivrYWg==: 00:27:33.244 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: 00:27:33.244 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:33.244 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:27:33.244 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGUzY2ExNmI0MjkyZWY2NGE1Yjk1NWNkYjk5MjdlMGU3ODQyY2I1NzFmMjhjY2NkivrYWg==: 00:27:33.244 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: ]] 00:27:33.244 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: 00:27:33.244 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:33.244 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.244 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:33.244 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:33.244 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:33.244 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.244 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:33.244 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.244 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.244 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.244 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.244 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.244 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.244 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.244 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.244 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.244 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.244 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.244 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.244 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.244 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.244 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:33.244 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.244 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.504 nvme0n1 00:27:33.504 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTdiOTY1MzBlNjIyN2UwMzNkYWQ5YjUxN2VkM2U3ZGL3pMTg: 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTdiOTY1MzBlNjIyN2UwMzNkYWQ5YjUxN2VkM2U3ZGL3pMTg: 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: ]] 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.766 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.027 nvme0n1 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE0NGVlM2NlZGIyOTQ0YzdkY2NlZDkxMDIxODVhMzZmNjAzZjFkMjQyZmJjYjA1efvgnA==: 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE0NGVlM2NlZGIyOTQ0YzdkY2NlZDkxMDIxODVhMzZmNjAzZjFkMjQyZmJjYjA1efvgnA==: 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: ]] 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:34.027 23:15:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.027 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.292 nvme0n1 00:27:34.292 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.292 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.292 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.292 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.292 23:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.292 23:15:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2I0Yzk5ZDcyMDE0Y2NjYzlkNmZmZDdjMDE2OGM4MzRiYWU5YTIyYTYzZWZiZTdkMGI0MGIyZjE0ZWE0YThmYRzP5Y8=: 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2I0Yzk5ZDcyMDE0Y2NjYzlkNmZmZDdjMDE2OGM4MzRiYWU5YTIyYTYzZWZiZTdkMGI0MGIyZjE0ZWE0YThmYRzP5Y8=: 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.292 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.553 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:34.553 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.553 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.553 nvme0n1 00:27:34.553 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.814 
23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNhZGM1NDI3MjE2MDJhOWMyODdkNWZjNTIyNjNmZTmJL61O: 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNhZGM1NDI3MjE2MDJhOWMyODdkNWZjNTIyNjNmZTmJL61O: 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: ]] 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.814 23:15:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.814 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.433 nvme0n1 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGUzY2ExNmI0MjkyZWY2NGE1Yjk1NWNkYjk5MjdlMGU3ODQyY2I1NzFmMjhjY2NkivrYWg==: 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:35.433 23:15:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGUzY2ExNmI0MjkyZWY2NGE1Yjk1NWNkYjk5MjdlMGU3ODQyY2I1NzFmMjhjY2NkivrYWg==: 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: ]] 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A 
ip_candidates 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.433 23:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.695 nvme0n1 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTdiOTY1MzBlNjIyN2UwMzNkYWQ5YjUxN2VkM2U3ZGL3pMTg: 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTdiOTY1MzBlNjIyN2UwMzNkYWQ5YjUxN2VkM2U3ZGL3pMTg: 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: ]] 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: 00:27:35.695 
23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.695 23:15:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.695 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.265 nvme0n1 00:27:36.265 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.265 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.265 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.266 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.266 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.266 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.266 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.266 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.266 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.266 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.266 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.266 23:15:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.266 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:36.266 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.266 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:36.266 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:36.266 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:36.266 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE0NGVlM2NlZGIyOTQ0YzdkY2NlZDkxMDIxODVhMzZmNjAzZjFkMjQyZmJjYjA1efvgnA==: 00:27:36.266 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: 00:27:36.266 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:36.266 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:36.266 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE0NGVlM2NlZGIyOTQ0YzdkY2NlZDkxMDIxODVhMzZmNjAzZjFkMjQyZmJjYjA1efvgnA==: 00:27:36.266 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: ]] 00:27:36.266 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: 00:27:36.266 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:36.266 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.266 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:36.266 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:27:36.266 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:36.266 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.266 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:36.266 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.266 23:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.266 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.266 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.266 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.266 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.266 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.266 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.266 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.266 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.266 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.266 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.266 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.266 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.266 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:36.266 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.266 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.837 nvme0n1 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:36.837 23:15:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2I0Yzk5ZDcyMDE0Y2NjYzlkNmZmZDdjMDE2OGM4MzRiYWU5YTIyYTYzZWZiZTdkMGI0MGIyZjE0ZWE0YThmYRzP5Y8=: 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2I0Yzk5ZDcyMDE0Y2NjYzlkNmZmZDdjMDE2OGM4MzRiYWU5YTIyYTYzZWZiZTdkMGI0MGIyZjE0ZWE0YThmYRzP5Y8=: 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.837 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.838 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.838 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.838 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.838 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:36.838 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.838 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.408 nvme0n1 00:27:37.408 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.408 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.408 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.408 
23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.408 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.408 23:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNhZGM1NDI3MjE2MDJhOWMyODdkNWZjNTIyNjNmZTmJL61O: 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNhZGM1NDI3MjE2MDJhOWMyODdkNWZjNTIyNjNmZTmJL61O: 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: ]] 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjMzYmY4YTdhYjk4MDNiN2I5MmI5MDNjNzViZmNjZjYzZTQ5MTM1NjQyN2M1ZTRlYjI1NmU0YmQyYzFhYzljY2T0oTE=: 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.408 23:15:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.408 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.349 nvme0n1 00:27:38.349 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.349 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.349 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.349 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.349 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.349 23:15:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.349 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.349 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.349 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.349 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.349 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.349 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.349 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:38.349 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.349 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:38.349 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:38.349 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:38.349 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGUzY2ExNmI0MjkyZWY2NGE1Yjk1NWNkYjk5MjdlMGU3ODQyY2I1NzFmMjhjY2NkivrYWg==: 00:27:38.349 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: 00:27:38.349 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.349 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:38.350 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGUzY2ExNmI0MjkyZWY2NGE1Yjk1NWNkYjk5MjdlMGU3ODQyY2I1NzFmMjhjY2NkivrYWg==: 00:27:38.350 23:15:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: ]] 00:27:38.350 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: 00:27:38.350 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:38.350 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.350 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:38.350 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:38.350 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:38.350 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.350 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:38.350 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.350 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.350 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.350 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.350 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.350 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.350 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.350 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.350 23:15:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.350 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.350 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.350 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.350 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.350 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.350 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:38.350 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.350 23:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.920 nvme0n1 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.920 23:15:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTdiOTY1MzBlNjIyN2UwMzNkYWQ5YjUxN2VkM2U3ZGL3pMTg: 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTdiOTY1MzBlNjIyN2UwMzNkYWQ5YjUxN2VkM2U3ZGL3pMTg: 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: ]] 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjUyM2EwYWY1OTliM2QzYzc1MjBiNWI4ZjM3NWU1YTEQ22G6: 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:38.920 23:15:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.920 23:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.861 nvme0n1 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE0NGVlM2NlZGIyOTQ0YzdkY2NlZDkxMDIxODVhMzZmNjAzZjFkMjQyZmJjYjA1efvgnA==: 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE0NGVlM2NlZGIyOTQ0YzdkY2NlZDkxMDIxODVhMzZmNjAzZjFkMjQyZmJjYjA1efvgnA==: 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: ]] 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDM4YjQ4NWYzZWNhZmI3NjE4OGVlMDYxZTA1YjkxMjP+m47t: 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.861 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.862 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.862 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.862 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:39.862 23:15:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.862 23:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.802 nvme0n1 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2I0Yzk5ZDcyMDE0Y2NjYzlkNmZmZDdjMDE2OGM4MzRiYWU5YTIyYTYzZWZiZTdkMGI0MGIyZjE0ZWE0YThmYRzP5Y8=: 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2I0Yzk5ZDcyMDE0Y2NjYzlkNmZmZDdjMDE2OGM4MzRiYWU5YTIyYTYzZWZiZTdkMGI0MGIyZjE0ZWE0YThmYRzP5Y8=: 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.802 
23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.802 23:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.374 nvme0n1 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGUzY2ExNmI0MjkyZWY2NGE1Yjk1NWNkYjk5MjdlMGU3ODQyY2I1NzFmMjhjY2NkivrYWg==: 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGUzY2ExNmI0MjkyZWY2NGE1Yjk1NWNkYjk5MjdlMGU3ODQyY2I1NzFmMjhjY2NkivrYWg==: 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: ]] 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg5NDg3YTQ2YjhmNWNiODIyMmMzYTk2NjA3MTM2ZjYzZTdjNTc3ZjY0YTQxNWU1zJDBuw==: 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.374 request: 00:27:41.374 { 00:27:41.374 "name": "nvme0", 00:27:41.374 "trtype": "tcp", 00:27:41.374 "traddr": "10.0.0.1", 00:27:41.374 "adrfam": "ipv4", 00:27:41.374 "trsvcid": "4420", 00:27:41.374 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:41.374 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:41.374 "prchk_reftag": false, 00:27:41.374 "prchk_guard": false, 00:27:41.374 "hdgst": false, 00:27:41.374 "ddgst": false, 00:27:41.374 "method": "bdev_nvme_attach_controller", 00:27:41.374 "req_id": 1 00:27:41.374 } 00:27:41.374 Got JSON-RPC error response 00:27:41.374 response: 00:27:41.374 { 
00:27:41.374 "code": -5, 00:27:41.374 "message": "Input/output error" 00:27:41.374 } 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:41.374 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.636 23:15:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.636 request: 00:27:41.636 { 00:27:41.636 "name": "nvme0", 
00:27:41.636 "trtype": "tcp", 00:27:41.636 "traddr": "10.0.0.1", 00:27:41.636 "adrfam": "ipv4", 00:27:41.636 "trsvcid": "4420", 00:27:41.636 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:41.636 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:41.636 "prchk_reftag": false, 00:27:41.636 "prchk_guard": false, 00:27:41.636 "hdgst": false, 00:27:41.636 "ddgst": false, 00:27:41.636 "dhchap_key": "key2", 00:27:41.636 "method": "bdev_nvme_attach_controller", 00:27:41.636 "req_id": 1 00:27:41.636 } 00:27:41.636 Got JSON-RPC error response 00:27:41.636 response: 00:27:41.636 { 00:27:41.636 "code": -5, 00:27:41.636 "message": "Input/output error" 00:27:41.636 } 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 
00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:41.636 23:15:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.636 request: 00:27:41.636 { 00:27:41.636 "name": "nvme0", 00:27:41.636 "trtype": "tcp", 00:27:41.636 "traddr": "10.0.0.1", 00:27:41.636 "adrfam": "ipv4", 00:27:41.636 "trsvcid": "4420", 00:27:41.636 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:41.636 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:41.636 "prchk_reftag": false, 00:27:41.636 "prchk_guard": false, 00:27:41.636 "hdgst": false, 00:27:41.636 "ddgst": false, 00:27:41.636 "dhchap_key": "key1", 00:27:41.636 "dhchap_ctrlr_key": "ckey2", 00:27:41.636 "method": "bdev_nvme_attach_controller", 00:27:41.636 "req_id": 1 00:27:41.636 } 00:27:41.636 Got JSON-RPC error response 00:27:41.636 response: 00:27:41.636 { 00:27:41.636 "code": -5, 00:27:41.636 "message": "Input/output error" 00:27:41.636 } 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:41.636 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:41.637 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 
00:27:41.637 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:27:41.637 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:41.637 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:41.637 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:27:41.637 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:41.637 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:27:41.637 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:41.637 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:41.637 rmmod nvme_tcp 00:27:41.637 rmmod nvme_fabrics 00:27:41.637 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:41.637 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:27:41.637 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:27:41.637 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1010425 ']' 00:27:41.637 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1010425 00:27:41.637 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 1010425 ']' 00:27:41.637 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 1010425 00:27:41.637 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:27:41.637 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:41.637 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1010425 00:27:41.897 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:27:41.897 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:41.897 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1010425' 00:27:41.897 killing process with pid 1010425 00:27:41.897 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 1010425 00:27:41.897 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 1010425 00:27:41.897 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:41.897 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:41.897 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:41.897 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:41.897 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:41.897 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.897 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:41.897 23:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.439 23:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:44.439 23:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:44.439 23:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:44.439 23:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:44.439 23:16:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:44.439 23:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:27:44.439 23:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:44.439 23:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:44.439 23:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:44.439 23:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:44.439 23:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:44.439 23:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:44.439 23:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:47.739 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:47.739 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:47.739 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:47.739 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:47.739 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:47.739 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:47.739 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:47.739 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:47.739 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:47.999 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:47.999 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:47.999 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:47.999 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:47.999 0000:00:01.3 
(8086 0b00): ioatdma -> vfio-pci 00:27:47.999 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:47.999 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:47.999 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:47.999 23:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.x2e /tmp/spdk.key-null.4Nw /tmp/spdk.key-sha256.jjE /tmp/spdk.key-sha384.S6L /tmp/spdk.key-sha512.qZx /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:47.999 23:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:52.208 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:52.208 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:52.208 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:52.208 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:52.208 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:52.208 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:52.208 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:52.208 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:52.208 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:52.208 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:27:52.208 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:52.208 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:52.208 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:52.208 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:52.208 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:52.208 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:52.208 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:52.208 00:27:52.208 real 0m58.888s 00:27:52.208 user 0m51.717s 00:27:52.208 sys 0m16.253s 00:27:52.208 23:16:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:52.208 23:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.208 ************************************ 00:27:52.208 END TEST nvmf_auth_host 00:27:52.208 ************************************ 00:27:52.208 23:16:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:27:52.208 23:16:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:52.208 23:16:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:52.208 23:16:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:52.208 23:16:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.208 ************************************ 00:27:52.208 START TEST nvmf_digest 00:27:52.208 ************************************ 00:27:52.208 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:52.208 * Looking for test storage... 
00:27:52.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.209 23:16:09 
nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 
00:27:52.209 23:16:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:00.350 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:00.350 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:00.350 23:16:17 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:00.350 Found net devices under 0000:31:00.0: cvl_0_0 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:00.350 Found net devices under 0000:31:00.1: cvl_0_1 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:00.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:00.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:28:00.350 00:28:00.350 --- 10.0.0.2 ping statistics --- 00:28:00.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.350 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:28:00.350 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:00.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:00.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.351 ms 00:28:00.351 00:28:00.351 --- 10.0.0.1 ping statistics --- 00:28:00.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.351 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:28:00.351 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:00.351 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:28:00.351 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:00.351 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:00.351 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:00.351 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:00.351 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:00.351 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 
00:28:00.351 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:00.351 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:00.351 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:00.351 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:00.351 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:00.351 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:00.351 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:00.351 ************************************ 00:28:00.351 START TEST nvmf_digest_clean 00:28:00.351 ************************************ 00:28:00.351 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:28:00.351 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:00.351 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:00.351 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:00.351 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:00.351 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:00.351 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:00.351 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:00.351 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 
00:28:00.351 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1027638 00:28:00.351 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1027638 00:28:00.351 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:00.351 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1027638 ']' 00:28:00.351 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.351 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:00.351 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:00.351 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:00.351 23:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:00.351 [2024-07-24 23:16:17.711128] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:28:00.351 [2024-07-24 23:16:17.711184] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:00.351 EAL: No free 2048 kB hugepages reported on node 1 00:28:00.351 [2024-07-24 23:16:17.789259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.351 [2024-07-24 23:16:17.862822] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:00.351 [2024-07-24 23:16:17.862859] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:00.351 [2024-07-24 23:16:17.862867] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:00.351 [2024-07-24 23:16:17.862873] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:00.351 [2024-07-24 23:16:17.862879] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:00.351 [2024-07-24 23:16:17.862897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.922 23:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:00.922 23:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:00.922 23:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:00.922 23:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:00.922 23:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:00.922 23:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:00.922 23:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:00.922 23:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:00.922 23:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:00.922 23:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.922 23:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:00.922 null0 00:28:00.922 [2024-07-24 23:16:18.593008] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:00.922 [2024-07-24 23:16:18.617211] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:00.922 23:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.922 23:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:28:00.922 23:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:00.922 23:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:00.922 23:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:00.922 23:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:00.922 23:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:00.922 23:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:00.922 23:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1027819 00:28:00.922 23:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1027819 /var/tmp/bperf.sock 00:28:00.922 23:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1027819 ']' 00:28:00.922 23:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:00.922 23:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:00.922 23:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:00.922 23:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:00.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:28:00.922 23:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:00.922 23:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:00.922 [2024-07-24 23:16:18.669793] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:28:00.922 [2024-07-24 23:16:18.669838] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1027819 ] 00:28:00.922 EAL: No free 2048 kB hugepages reported on node 1 00:28:01.183 [2024-07-24 23:16:18.753570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.183 [2024-07-24 23:16:18.817409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:01.755 23:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:01.755 23:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:01.755 23:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:01.755 23:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:01.755 23:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:02.015 23:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:02.015 23:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:02.276 nvme0n1 00:28:02.276 23:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:02.276 23:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:02.276 Running I/O for 2 seconds... 00:28:04.821 00:28:04.821 Latency(us) 00:28:04.821 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:04.821 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:04.821 nvme0n1 : 2.00 20642.12 80.63 0.00 0.00 6193.64 2894.51 14308.69 00:28:04.821 =================================================================================================================== 00:28:04.821 Total : 20642.12 80.63 0.00 0.00 6193.64 2894.51 14308.69 00:28:04.821 0 00:28:04.821 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:04.821 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:04.821 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:04.821 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:04.821 | select(.opcode=="crc32c") 00:28:04.821 | "\(.module_name) \(.executed)"' 00:28:04.821 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:04.821 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:04.821 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@94 -- # exp_module=software 00:28:04.821 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:04.821 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:04.821 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1027819 00:28:04.821 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1027819 ']' 00:28:04.821 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1027819 00:28:04.821 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:04.821 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:04.821 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1027819 00:28:04.821 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:04.821 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:04.821 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1027819' 00:28:04.821 killing process with pid 1027819 00:28:04.821 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1027819 00:28:04.821 Received shutdown signal, test time was about 2.000000 seconds 00:28:04.821 00:28:04.821 Latency(us) 00:28:04.821 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:04.821 =================================================================================================================== 00:28:04.821 Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:28:04.821 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1027819 00:28:04.821 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:04.821 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:04.821 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:04.822 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:04.822 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:04.822 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:04.822 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:04.822 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1028501 00:28:04.822 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1028501 /var/tmp/bperf.sock 00:28:04.822 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1028501 ']' 00:28:04.822 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:04.822 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:04.822 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:04.822 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:04.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:04.822 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:04.822 23:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:04.822 [2024-07-24 23:16:22.420883] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:28:04.822 [2024-07-24 23:16:22.420940] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1028501 ] 00:28:04.822 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:04.822 Zero copy mechanism will not be used. 00:28:04.822 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.822 [2024-07-24 23:16:22.502295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.822 [2024-07-24 23:16:22.566260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:05.394 23:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:05.394 23:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:05.394 23:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:05.394 23:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:05.394 23:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:05.656 23:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:05.656 23:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:05.973 nvme0n1 00:28:05.973 23:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:05.973 23:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:05.973 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:05.973 Zero copy mechanism will not be used. 00:28:05.973 Running I/O for 2 seconds... 00:28:08.531 00:28:08.531 Latency(us) 00:28:08.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:08.531 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:08.531 nvme0n1 : 2.00 2700.16 337.52 0.00 0.00 5922.69 1419.95 8847.36 00:28:08.531 =================================================================================================================== 00:28:08.531 Total : 2700.16 337.52 0.00 0.00 5922.69 1419.95 8847.36 00:28:08.531 0 00:28:08.531 23:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:08.531 23:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:08.531 23:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:08.531 23:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:08.531 | select(.opcode=="crc32c") 
00:28:08.531 | "\(.module_name) \(.executed)"' 00:28:08.531 23:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:08.532 23:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:08.532 23:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:08.532 23:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:08.532 23:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:08.532 23:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1028501 00:28:08.532 23:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1028501 ']' 00:28:08.532 23:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1028501 00:28:08.532 23:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:08.532 23:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:08.532 23:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1028501 00:28:08.532 23:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:08.532 23:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:08.532 23:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1028501' 00:28:08.532 killing process with pid 1028501 00:28:08.532 23:16:25 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1028501 00:28:08.532 Received shutdown signal, test time was about 2.000000 seconds 00:28:08.532 00:28:08.532 Latency(us) 00:28:08.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:08.532 =================================================================================================================== 00:28:08.532 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:08.532 23:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1028501 00:28:08.532 23:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:08.532 23:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:08.532 23:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:08.532 23:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:08.532 23:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:08.532 23:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:08.532 23:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:08.532 23:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1029213 00:28:08.532 23:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1029213 /var/tmp/bperf.sock 00:28:08.532 23:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1029213 ']' 00:28:08.532 23:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r 
/var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:08.532 23:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:08.532 23:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:08.532 23:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:08.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:08.532 23:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:08.532 23:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:08.532 [2024-07-24 23:16:26.109792] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:28:08.532 [2024-07-24 23:16:26.109846] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1029213 ] 00:28:08.532 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.532 [2024-07-24 23:16:26.189985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.532 [2024-07-24 23:16:26.244631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.102 23:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:09.102 23:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:09.102 23:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:09.102 23:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:09.102 23:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:09.362 23:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:09.362 23:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:09.622 nvme0n1 00:28:09.622 23:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:09.622 23:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:09.622 Running I/O for 2 seconds... 00:28:12.161 00:28:12.161 Latency(us) 00:28:12.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:12.161 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:12.161 nvme0n1 : 2.01 21300.61 83.21 0.00 0.00 5997.80 5352.11 14745.60 00:28:12.161 =================================================================================================================== 00:28:12.161 Total : 21300.61 83.21 0.00 0.00 5997.80 5352.11 14745.60 00:28:12.161 0 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:12.161 | select(.opcode=="crc32c") 00:28:12.161 | "\(.module_name) \(.executed)"' 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@98 -- # killprocess 1029213 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1029213 ']' 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1029213 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1029213 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1029213' 00:28:12.161 killing process with pid 1029213 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1029213 00:28:12.161 Received shutdown signal, test time was about 2.000000 seconds 00:28:12.161 00:28:12.161 Latency(us) 00:28:12.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:12.161 =================================================================================================================== 00:28:12.161 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1029213 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 
-- # local rw bs qd scan_dsa 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1029999 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1029999 /var/tmp/bperf.sock 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1029999 ']' 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:12.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:12.161 23:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:12.161 [2024-07-24 23:16:29.825239] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:28:12.161 [2024-07-24 23:16:29.825297] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1029999 ] 00:28:12.161 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:12.161 Zero copy mechanism will not be used. 00:28:12.161 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.161 [2024-07-24 23:16:29.907861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.421 [2024-07-24 23:16:29.961258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.992 23:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:12.992 23:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:12.992 23:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:12.992 23:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:12.992 23:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:13.252 23:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:13.252 23:16:30 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:13.513 nvme0n1 00:28:13.513 23:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:13.513 23:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:13.513 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:13.513 Zero copy mechanism will not be used. 00:28:13.513 Running I/O for 2 seconds... 00:28:15.426 00:28:15.426 Latency(us) 00:28:15.426 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:15.426 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:15.426 nvme0n1 : 2.00 3971.29 496.41 0.00 0.00 4024.45 2198.19 13707.95 00:28:15.426 =================================================================================================================== 00:28:15.426 Total : 3971.29 496.41 0.00 0.00 4024.45 2198.19 13707.95 00:28:15.426 0 00:28:15.426 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:15.426 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:15.426 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:15.426 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:15.426 | select(.opcode=="crc32c") 00:28:15.426 | "\(.module_name) \(.executed)"' 00:28:15.426 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:15.687 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:15.687 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:15.687 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:15.687 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:15.687 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1029999 00:28:15.687 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1029999 ']' 00:28:15.687 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1029999 00:28:15.687 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:15.687 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:15.687 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1029999 00:28:15.687 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:15.687 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:15.687 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1029999' 00:28:15.687 killing process with pid 1029999 00:28:15.687 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1029999 00:28:15.687 Received shutdown signal, test time was about 2.000000 seconds 
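The `get_accel_stats` step above pipes the `accel_get_stats` RPC output through `jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'` so that `read -r acc_module acc_executed` can pick up which accel module handled CRC32C and how many operations it executed. A hypothetical Python equivalent of that filter (the sample JSON below is illustrative, not taken from this run; only the field names `.operations[].opcode`, `.module_name`, and `.executed` come from the jq expression in the log):

```python
import json

# Illustrative sample shaped like accel_get_stats output; values are made up.
stats_json = '''
{"operations": [
  {"opcode": "crc32c", "module_name": "software", "executed": 4212},
  {"opcode": "copy",   "module_name": "software", "executed": 17}
]}
'''

def crc32c_stats(raw):
    # Mirrors: jq -rc '.operations[] | select(.opcode=="crc32c")
    #                  | "\(.module_name) \(.executed)"'
    stats = json.loads(raw)
    return ["{} {}".format(op["module_name"], op["executed"])
            for op in stats["operations"] if op["opcode"] == "crc32c"]

print(crc32c_stats(stats_json))
```

With the sample above this yields `['software 4212']`, matching the test's subsequent check that `exp_module=software` and `acc_executed > 0`.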
00:28:15.687 00:28:15.687 Latency(us) 00:28:15.687 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:15.687 =================================================================================================================== 00:28:15.687 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:15.687 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1029999 00:28:15.947 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1027638 00:28:15.947 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1027638 ']' 00:28:15.947 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1027638 00:28:15.947 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:15.947 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:15.947 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1027638 00:28:15.947 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:15.947 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:15.947 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1027638' 00:28:15.947 killing process with pid 1027638 00:28:15.947 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1027638 00:28:15.947 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1027638 00:28:15.947 00:28:15.947 real 0m16.081s 00:28:15.947 user 0m31.379s 00:28:15.947 sys 0m3.378s 
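The clean-test summary earlier in the log reports `nvme0n1 : 2.00 3971.29 496.41 …`, i.e. 3971.29 IOPS and 496.41 MiB/s for the `-w randwrite -o 131072 -q 16` bdevperf job. Those two columns are consistent with each other, since throughput is just IOPS times the 131072-byte IO size; a quick arithmetic check:

```python
# Sanity-check the bdevperf summary columns from the nvmf_digest_clean run:
# 3971.29 IOPS at 128 KiB IOs should equal the reported 496.41 MiB/s.
IO_SIZE = 131072              # -o 131072 on the bdevperf command line
iops = 3971.29                # from the Job summary line
mib_per_s = iops * IO_SIZE / (1024 * 1024)
print(round(mib_per_s, 2))    # 496.41
```

The same relation holds for the error-injection runs below, where the 4096-byte IO size makes the MiB/s column 1/32 of the IOPS figure.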
00:28:15.947 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:15.947 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:15.947 ************************************ 00:28:15.947 END TEST nvmf_digest_clean 00:28:15.947 ************************************ 00:28:16.207 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:16.207 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:16.208 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:16.208 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:16.208 ************************************ 00:28:16.208 START TEST nvmf_digest_error 00:28:16.208 ************************************ 00:28:16.208 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:28:16.208 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:16.208 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:16.208 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:16.208 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:16.208 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1030896 00:28:16.208 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1030896 00:28:16.208 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:16.208 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1030896 ']' 00:28:16.208 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:16.208 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:16.208 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:16.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:16.208 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:16.208 23:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:16.208 [2024-07-24 23:16:33.866650] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:28:16.208 [2024-07-24 23:16:33.866704] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:16.208 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.208 [2024-07-24 23:16:33.942253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.468 [2024-07-24 23:16:34.015791] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:16.468 [2024-07-24 23:16:34.015829] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:16.468 [2024-07-24 23:16:34.015837] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:16.468 [2024-07-24 23:16:34.015843] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:16.468 [2024-07-24 23:16:34.015849] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:16.468 [2024-07-24 23:16:34.015868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.039 23:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:17.039 23:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:17.039 23:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:17.039 23:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:17.039 23:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:17.039 23:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:17.039 23:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:17.039 23:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.039 23:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:17.039 [2024-07-24 23:16:34.673777] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:17.039 23:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.039 23:16:34 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:17.039 23:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:17.039 23:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.039 23:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:17.039 null0 00:28:17.039 [2024-07-24 23:16:34.754134] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:17.039 [2024-07-24 23:16:34.778351] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:17.039 23:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.039 23:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:17.039 23:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:17.039 23:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:17.039 23:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:17.039 23:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:17.039 23:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1030951 00:28:17.039 23:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1030951 /var/tmp/bperf.sock 00:28:17.039 23:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1030951 ']' 00:28:17.039 23:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:17.039 23:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:17.039 23:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:17.039 23:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:17.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:17.039 23:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:17.039 23:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:17.300 [2024-07-24 23:16:34.834912] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:28:17.300 [2024-07-24 23:16:34.834959] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1030951 ] 00:28:17.300 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.300 [2024-07-24 23:16:34.917192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.300 [2024-07-24 23:16:34.971004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.871 23:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:17.871 23:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:17.871 23:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:17.871 23:16:35 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:18.131 23:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:18.131 23:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.131 23:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:18.131 23:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.131 23:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:18.131 23:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:18.392 nvme0n1 00:28:18.392 23:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:18.392 23:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.392 23:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:18.392 23:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.392 23:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:18.392 23:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:18.652 Running I/O for 2 seconds... 00:28:18.652 [2024-07-24 23:16:36.204455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:18.652 [2024-07-24 23:16:36.204485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.652 [2024-07-24 23:16:36.204493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.652 [2024-07-24 23:16:36.217812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:18.652 [2024-07-24 23:16:36.217830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.652 [2024-07-24 23:16:36.217837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.653 [2024-07-24 23:16:36.230936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:18.653 [2024-07-24 23:16:36.230954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.653 [2024-07-24 23:16:36.230961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.653 [2024-07-24 23:16:36.243536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:18.653 [2024-07-24 23:16:36.243555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:95 nsid:1 lba:16892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.653 [2024-07-24 23:16:36.243561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.653 [2024-07-24 23:16:36.255364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:18.653 [2024-07-24 23:16:36.255381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.653 [2024-07-24 23:16:36.255388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.653 [2024-07-24 23:16:36.268132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:18.653 [2024-07-24 23:16:36.268149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.653 [2024-07-24 23:16:36.268156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.653 [2024-07-24 23:16:36.279261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:18.653 [2024-07-24 23:16:36.279279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.653 [2024-07-24 23:16:36.279286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.653 [2024-07-24 23:16:36.293308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:18.653 [2024-07-24 23:16:36.293326] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:25342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.653 [2024-07-24 23:16:36.293332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.653 [2024-07-24 23:16:36.304726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:18.653 [2024-07-24 23:16:36.304742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.653 [2024-07-24 23:16:36.304748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.653 [2024-07-24 23:16:36.315072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:18.653 [2024-07-24 23:16:36.315088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.653 [2024-07-24 23:16:36.315095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.653 [2024-07-24 23:16:36.329759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:18.653 [2024-07-24 23:16:36.329776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.653 [2024-07-24 23:16:36.329782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.653 [2024-07-24 23:16:36.341532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1d64b40) 00:28:18.653 [2024-07-24 23:16:36.341549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.653 [2024-07-24 23:16:36.341555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.653 [2024-07-24 23:16:36.353827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:18.653 [2024-07-24 23:16:36.353844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.653 [2024-07-24 23:16:36.353849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.653 [2024-07-24 23:16:36.366943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:18.653 [2024-07-24 23:16:36.366960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.653 [2024-07-24 23:16:36.366966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.653 [2024-07-24 23:16:36.379585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:18.653 [2024-07-24 23:16:36.379604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.653 [2024-07-24 23:16:36.379610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.653 [2024-07-24 23:16:36.391799] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:18.653 [2024-07-24 23:16:36.391815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.653 [2024-07-24 23:16:36.391821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.653 [2024-07-24 23:16:36.402261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:18.653 [2024-07-24 23:16:36.402278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.653 [2024-07-24 23:16:36.402284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.653 [2024-07-24 23:16:36.414729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:18.653 [2024-07-24 23:16:36.414746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.653 [2024-07-24 23:16:36.414755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.653 [2024-07-24 23:16:36.428309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:18.653 [2024-07-24 23:16:36.428326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.653 [2024-07-24 23:16:36.428332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:18.914 [2024-07-24 23:16:36.440425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:18.915 [2024-07-24 23:16:36.440443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.915 [2024-07-24 23:16:36.440449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.915 [2024-07-24 23:16:36.453442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:18.915 [2024-07-24 23:16:36.453459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.915 [2024-07-24 23:16:36.453466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.915 [2024-07-24 23:16:36.465603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:18.915 [2024-07-24 23:16:36.465620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.915 [2024-07-24 23:16:36.465626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.915 [2024-07-24 23:16:36.478520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:18.915 [2024-07-24 23:16:36.478538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.915 [2024-07-24 23:16:36.478544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.915 [2024-07-24 23:16:36.488338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:18.915 [2024-07-24 23:16:36.488357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.915 [2024-07-24 23:16:36.488363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.915 [2024-07-24 23:16:36.501783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:18.915 [2024-07-24 23:16:36.501800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.915 [2024-07-24 23:16:36.501806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.915 [2024-07-24 23:16:36.514783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:18.915 [2024-07-24 23:16:36.514801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.915 [2024-07-24 23:16:36.514807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.915 [2024-07-24 23:16:36.528559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:18.915 [2024-07-24 23:16:36.528577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.915 [2024-07-24 23:16:36.528583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.915 [2024-07-24 23:16:36.540367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:18.915 [2024-07-24 23:16:36.540383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.915 [2024-07-24 23:16:36.540389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.915 [2024-07-24 23:16:36.550626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:18.915 [2024-07-24 23:16:36.550643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.915 [2024-07-24 23:16:36.550649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.915 [2024-07-24 23:16:36.563938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:18.915 [2024-07-24 23:16:36.563955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:25383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.915 [2024-07-24 23:16:36.563961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.915 [2024-07-24 23:16:36.577550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:18.915 [2024-07-24 23:16:36.577567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.915 [2024-07-24 23:16:36.577574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.915 [2024-07-24 23:16:36.589196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:18.915 [2024-07-24 23:16:36.589214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.915 [2024-07-24 23:16:36.589223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.915 [2024-07-24 23:16:36.601642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:18.915 [2024-07-24 23:16:36.601659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.915 [2024-07-24 23:16:36.601666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.915 [2024-07-24 23:16:36.613907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:18.915 [2024-07-24 23:16:36.613924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.915 [2024-07-24 23:16:36.613931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.915 [2024-07-24 23:16:36.626419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:18.915 [2024-07-24 23:16:36.626435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.915 [2024-07-24 23:16:36.626441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.915 [2024-07-24 23:16:36.638465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:18.915 [2024-07-24 23:16:36.638482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.915 [2024-07-24 23:16:36.638488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.915 [2024-07-24 23:16:36.648526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:18.915 [2024-07-24 23:16:36.648544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.915 [2024-07-24 23:16:36.648550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.915 [2024-07-24 23:16:36.662377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:18.915 [2024-07-24 23:16:36.662394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.915 [2024-07-24 23:16:36.662400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.915 [2024-07-24 23:16:36.675113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:18.915 [2024-07-24 23:16:36.675131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.915 [2024-07-24 23:16:36.675137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.915 [2024-07-24 23:16:36.688505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:18.915 [2024-07-24 23:16:36.688522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.915 [2024-07-24 23:16:36.688528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.177 [2024-07-24 23:16:36.701938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.177 [2024-07-24 23:16:36.701959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.177 [2024-07-24 23:16:36.701965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.177 [2024-07-24 23:16:36.715245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.177 [2024-07-24 23:16:36.715261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.177 [2024-07-24 23:16:36.715267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.177 [2024-07-24 23:16:36.725758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.177 [2024-07-24 23:16:36.725774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.177 [2024-07-24 23:16:36.725781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.177 [2024-07-24 23:16:36.738884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.177 [2024-07-24 23:16:36.738901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.177 [2024-07-24 23:16:36.738907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.177 [2024-07-24 23:16:36.752612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.177 [2024-07-24 23:16:36.752629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.177 [2024-07-24 23:16:36.752635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.177 [2024-07-24 23:16:36.762063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.177 [2024-07-24 23:16:36.762080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.177 [2024-07-24 23:16:36.762086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.177 [2024-07-24 23:16:36.774228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.177 [2024-07-24 23:16:36.774246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.177 [2024-07-24 23:16:36.774252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.177 [2024-07-24 23:16:36.787221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.177 [2024-07-24 23:16:36.787238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.177 [2024-07-24 23:16:36.787244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.177 [2024-07-24 23:16:36.800095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.177 [2024-07-24 23:16:36.800112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.177 [2024-07-24 23:16:36.800118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.177 [2024-07-24 23:16:36.814041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.177 [2024-07-24 23:16:36.814058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.177 [2024-07-24 23:16:36.814064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.177 [2024-07-24 23:16:36.825105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.177 [2024-07-24 23:16:36.825121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.177 [2024-07-24 23:16:36.825127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.177 [2024-07-24 23:16:36.837778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.177 [2024-07-24 23:16:36.837794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.177 [2024-07-24 23:16:36.837800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.177 [2024-07-24 23:16:36.850227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.177 [2024-07-24 23:16:36.850244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.177 [2024-07-24 23:16:36.850251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.177 [2024-07-24 23:16:36.863372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.177 [2024-07-24 23:16:36.863389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.177 [2024-07-24 23:16:36.863395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.177 [2024-07-24 23:16:36.875006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.177 [2024-07-24 23:16:36.875023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.177 [2024-07-24 23:16:36.875029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.177 [2024-07-24 23:16:36.888639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.177 [2024-07-24 23:16:36.888657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.177 [2024-07-24 23:16:36.888663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.177 [2024-07-24 23:16:36.900944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.177 [2024-07-24 23:16:36.900961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.177 [2024-07-24 23:16:36.900967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.177 [2024-07-24 23:16:36.910762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.177 [2024-07-24 23:16:36.910779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.177 [2024-07-24 23:16:36.910788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.177 [2024-07-24 23:16:36.925130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.177 [2024-07-24 23:16:36.925147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.177 [2024-07-24 23:16:36.925153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.177 [2024-07-24 23:16:36.937477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.177 [2024-07-24 23:16:36.937494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.177 [2024-07-24 23:16:36.937500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.177 [2024-07-24 23:16:36.948503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.177 [2024-07-24 23:16:36.948520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.177 [2024-07-24 23:16:36.948525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.177 [2024-07-24 23:16:36.959737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.177 [2024-07-24 23:16:36.959758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.178 [2024-07-24 23:16:36.959764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.438 [2024-07-24 23:16:36.973338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.438 [2024-07-24 23:16:36.973355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.438 [2024-07-24 23:16:36.973361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.438 [2024-07-24 23:16:36.986140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.438 [2024-07-24 23:16:36.986157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.438 [2024-07-24 23:16:36.986163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.438 [2024-07-24 23:16:36.999169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.439 [2024-07-24 23:16:36.999186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.439 [2024-07-24 23:16:36.999192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.439 [2024-07-24 23:16:37.011814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.439 [2024-07-24 23:16:37.011831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.439 [2024-07-24 23:16:37.011838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.439 [2024-07-24 23:16:37.022065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.439 [2024-07-24 23:16:37.022082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.439 [2024-07-24 23:16:37.022089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.439 [2024-07-24 23:16:37.034794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.439 [2024-07-24 23:16:37.034811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.439 [2024-07-24 23:16:37.034817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.439 [2024-07-24 23:16:37.047564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.439 [2024-07-24 23:16:37.047582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.439 [2024-07-24 23:16:37.047588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.439 [2024-07-24 23:16:37.058915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.439 [2024-07-24 23:16:37.058932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.439 [2024-07-24 23:16:37.058939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.439 [2024-07-24 23:16:37.072691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.439 [2024-07-24 23:16:37.072708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.439 [2024-07-24 23:16:37.072714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.439 [2024-07-24 23:16:37.084671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.439 [2024-07-24 23:16:37.084688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.439 [2024-07-24 23:16:37.084694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.439 [2024-07-24 23:16:37.095146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.439 [2024-07-24 23:16:37.095163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.439 [2024-07-24 23:16:37.095169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.439 [2024-07-24 23:16:37.107499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.439 [2024-07-24 23:16:37.107515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.439 [2024-07-24 23:16:37.107522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.439 [2024-07-24 23:16:37.120787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.439 [2024-07-24 23:16:37.120805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.439 [2024-07-24 23:16:37.120814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.439 [2024-07-24 23:16:37.133813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.439 [2024-07-24 23:16:37.133830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.439 [2024-07-24 23:16:37.133836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.439 [2024-07-24 23:16:37.142997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.439 [2024-07-24 23:16:37.143014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.439 [2024-07-24 23:16:37.143020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.439 [2024-07-24 23:16:37.157479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.439 [2024-07-24 23:16:37.157495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.439 [2024-07-24 23:16:37.157502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.439 [2024-07-24 23:16:37.169011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.439 [2024-07-24 23:16:37.169028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.439 [2024-07-24 23:16:37.169034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.439 [2024-07-24 23:16:37.182736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.439 [2024-07-24 23:16:37.182757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:13948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.439 [2024-07-24 23:16:37.182763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.439 [2024-07-24 23:16:37.195046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.439 [2024-07-24 23:16:37.195063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.439 [2024-07-24 23:16:37.195069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.439 [2024-07-24 23:16:37.206514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.439 [2024-07-24 23:16:37.206530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.439 [2024-07-24 23:16:37.206536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.439 [2024-07-24 23:16:37.218270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.439 [2024-07-24 23:16:37.218286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.439 [2024-07-24 23:16:37.218292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.700 [2024-07-24 23:16:37.231159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.700 [2024-07-24 23:16:37.231181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.700 [2024-07-24 23:16:37.231187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.700 [2024-07-24 23:16:37.243647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.700 [2024-07-24 23:16:37.243664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.700 [2024-07-24 23:16:37.243670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.700 [2024-07-24 23:16:37.255081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.700 [2024-07-24 23:16:37.255097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.700 [2024-07-24 23:16:37.255103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.700 [2024-07-24 23:16:37.268005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.700 [2024-07-24 23:16:37.268021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.700 [2024-07-24 23:16:37.268027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.700 [2024-07-24 23:16:37.279499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.700 [2024-07-24 23:16:37.279515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.700 [2024-07-24 23:16:37.279521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.700 [2024-07-24 23:16:37.292604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.700 [2024-07-24 23:16:37.292620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.700 [2024-07-24 23:16:37.292627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.700 [2024-07-24 23:16:37.305225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.700 [2024-07-24 23:16:37.305241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.701 [2024-07-24 23:16:37.305248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.701 [2024-07-24 23:16:37.318560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.701 [2024-07-24 23:16:37.318577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.701 [2024-07-24 23:16:37.318583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.701 [2024-07-24 23:16:37.330283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.701 [2024-07-24 23:16:37.330299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.701 [2024-07-24 23:16:37.330305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.701 [2024-07-24 23:16:37.342312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.701 [2024-07-24 23:16:37.342329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.701 [2024-07-24 23:16:37.342335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.701 [2024-07-24 23:16:37.354230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.701 [2024-07-24 23:16:37.354247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.701 [2024-07-24 23:16:37.354253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.701 [2024-07-24 23:16:37.365063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.701 [2024-07-24 23:16:37.365079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.701 [2024-07-24 23:16:37.365085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.701 [2024-07-24 23:16:37.377510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.701 [2024-07-24 23:16:37.377527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.701 [2024-07-24 23:16:37.377533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.701 [2024-07-24 23:16:37.390858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.701 [2024-07-24 23:16:37.390875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.701 [2024-07-24 23:16:37.390881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.701 [2024-07-24 23:16:37.403627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.701 [2024-07-24 23:16:37.403644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.701 [2024-07-24 23:16:37.403650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.701 [2024-07-24 23:16:37.417428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.701 [2024-07-24 23:16:37.417444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.701 [2024-07-24 23:16:37.417450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.701 [2024-07-24 23:16:37.429197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.701 [2024-07-24 23:16:37.429213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.701 [2024-07-24 23:16:37.429220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.701 [2024-07-24 23:16:37.439159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.701 [2024-07-24 23:16:37.439176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.701 [2024-07-24 23:16:37.439185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.701 [2024-07-24 23:16:37.452545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.701 [2024-07-24 23:16:37.452562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.701 [2024-07-24 23:16:37.452568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.701 [2024-07-24 23:16:37.465883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.701 [2024-07-24 23:16:37.465900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.701 [2024-07-24 23:16:37.465906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.701 [2024-07-24 23:16:37.478049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40)
00:28:19.701 [2024-07-24 23:16:37.478066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.701 [2024-07-24 23:16:37.478072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.963 [2024-07-24 23:16:37.490810]
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:19.963 [2024-07-24 23:16:37.490827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.963 [2024-07-24 23:16:37.490834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.963 [2024-07-24 23:16:37.502564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:19.963 [2024-07-24 23:16:37.502580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.963 [2024-07-24 23:16:37.502586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.963 [2024-07-24 23:16:37.513062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:19.963 [2024-07-24 23:16:37.513079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.963 [2024-07-24 23:16:37.513085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.963 [2024-07-24 23:16:37.525971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:19.963 [2024-07-24 23:16:37.525988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.963 [2024-07-24 23:16:37.525994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:19.963 [2024-07-24 23:16:37.540703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:19.963 [2024-07-24 23:16:37.540720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.963 [2024-07-24 23:16:37.540726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.963 [2024-07-24 23:16:37.552606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:19.963 [2024-07-24 23:16:37.552622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.963 [2024-07-24 23:16:37.552628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.963 [2024-07-24 23:16:37.564254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:19.963 [2024-07-24 23:16:37.564270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.963 [2024-07-24 23:16:37.564277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.963 [2024-07-24 23:16:37.575919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:19.963 [2024-07-24 23:16:37.575936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.963 [2024-07-24 23:16:37.575942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.963 [2024-07-24 23:16:37.590059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:19.963 [2024-07-24 23:16:37.590075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.963 [2024-07-24 23:16:37.590081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.963 [2024-07-24 23:16:37.601683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:19.963 [2024-07-24 23:16:37.601699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.963 [2024-07-24 23:16:37.601705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.963 [2024-07-24 23:16:37.614008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:19.963 [2024-07-24 23:16:37.614024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.963 [2024-07-24 23:16:37.614030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.963 [2024-07-24 23:16:37.626552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:19.963 [2024-07-24 23:16:37.626570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.963 [2024-07-24 23:16:37.626576] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.963 [2024-07-24 23:16:37.639223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:19.963 [2024-07-24 23:16:37.639240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.963 [2024-07-24 23:16:37.639246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.963 [2024-07-24 23:16:37.650962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:19.963 [2024-07-24 23:16:37.650979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.963 [2024-07-24 23:16:37.650988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.963 [2024-07-24 23:16:37.662271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:19.963 [2024-07-24 23:16:37.662287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.963 [2024-07-24 23:16:37.662293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.963 [2024-07-24 23:16:37.675621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:19.963 [2024-07-24 23:16:37.675638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16978 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:19.963 [2024-07-24 23:16:37.675644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.963 [2024-07-24 23:16:37.687362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:19.963 [2024-07-24 23:16:37.687378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.963 [2024-07-24 23:16:37.687384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.963 [2024-07-24 23:16:37.698269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:19.963 [2024-07-24 23:16:37.698285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.963 [2024-07-24 23:16:37.698291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.963 [2024-07-24 23:16:37.711644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:19.963 [2024-07-24 23:16:37.711660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.963 [2024-07-24 23:16:37.711666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.963 [2024-07-24 23:16:37.725376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:19.963 [2024-07-24 23:16:37.725393] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.963 [2024-07-24 23:16:37.725399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.963 [2024-07-24 23:16:37.737756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:19.963 [2024-07-24 23:16:37.737773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.963 [2024-07-24 23:16:37.737779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.963 [2024-07-24 23:16:37.747390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:19.963 [2024-07-24 23:16:37.747406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.963 [2024-07-24 23:16:37.747413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.225 [2024-07-24 23:16:37.761369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:20.225 [2024-07-24 23:16:37.761389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.225 [2024-07-24 23:16:37.761396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.225 [2024-07-24 23:16:37.774020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:20.225 [2024-07-24 
23:16:37.774037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.225 [2024-07-24 23:16:37.774043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.225 [2024-07-24 23:16:37.786898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:20.225 [2024-07-24 23:16:37.786914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.225 [2024-07-24 23:16:37.786920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.225 [2024-07-24 23:16:37.798469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:20.225 [2024-07-24 23:16:37.798487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.225 [2024-07-24 23:16:37.798493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.225 [2024-07-24 23:16:37.810124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:20.225 [2024-07-24 23:16:37.810141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.225 [2024-07-24 23:16:37.810147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.225 [2024-07-24 23:16:37.822012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1d64b40) 00:28:20.225 [2024-07-24 23:16:37.822028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.225 [2024-07-24 23:16:37.822034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.225 [2024-07-24 23:16:37.834777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:20.225 [2024-07-24 23:16:37.834793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.225 [2024-07-24 23:16:37.834799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.225 [2024-07-24 23:16:37.847865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:20.225 [2024-07-24 23:16:37.847882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.225 [2024-07-24 23:16:37.847888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.225 [2024-07-24 23:16:37.860958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:20.225 [2024-07-24 23:16:37.860975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.225 [2024-07-24 23:16:37.860981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.225 [2024-07-24 23:16:37.874144] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:20.225 [2024-07-24 23:16:37.874160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.225 [2024-07-24 23:16:37.874166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.225 [2024-07-24 23:16:37.885754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:20.225 [2024-07-24 23:16:37.885769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.225 [2024-07-24 23:16:37.885775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.225 [2024-07-24 23:16:37.896151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:20.225 [2024-07-24 23:16:37.896168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.225 [2024-07-24 23:16:37.896174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.225 [2024-07-24 23:16:37.909145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:20.225 [2024-07-24 23:16:37.909161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.225 [2024-07-24 23:16:37.909167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:20.225 [2024-07-24 23:16:37.923125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:20.225 [2024-07-24 23:16:37.923141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.225 [2024-07-24 23:16:37.923147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.225 [2024-07-24 23:16:37.935347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:20.225 [2024-07-24 23:16:37.935363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.225 [2024-07-24 23:16:37.935369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.225 [2024-07-24 23:16:37.945798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:20.225 [2024-07-24 23:16:37.945814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.225 [2024-07-24 23:16:37.945820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.225 [2024-07-24 23:16:37.959169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:20.225 [2024-07-24 23:16:37.959186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.225 [2024-07-24 23:16:37.959192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.225 [2024-07-24 23:16:37.970757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:20.225 [2024-07-24 23:16:37.970773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.225 [2024-07-24 23:16:37.970782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.225 [2024-07-24 23:16:37.984726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:20.225 [2024-07-24 23:16:37.984744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.225 [2024-07-24 23:16:37.984755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.225 [2024-07-24 23:16:37.996479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:20.225 [2024-07-24 23:16:37.996496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.225 [2024-07-24 23:16:37.996502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.225 [2024-07-24 23:16:38.007627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:20.225 [2024-07-24 23:16:38.007644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.225 [2024-07-24 
23:16:38.007650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.487 [2024-07-24 23:16:38.020831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:20.487 [2024-07-24 23:16:38.020848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.487 [2024-07-24 23:16:38.020854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.487 [2024-07-24 23:16:38.032367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:20.487 [2024-07-24 23:16:38.032383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.487 [2024-07-24 23:16:38.032389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.487 [2024-07-24 23:16:38.046009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:20.487 [2024-07-24 23:16:38.046026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.487 [2024-07-24 23:16:38.046032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.487 [2024-07-24 23:16:38.057191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:20.487 [2024-07-24 23:16:38.057207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12552 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.487 [2024-07-24 23:16:38.057213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.487 [2024-07-24 23:16:38.069976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:20.487 [2024-07-24 23:16:38.069992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.487 [2024-07-24 23:16:38.069998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.487 [2024-07-24 23:16:38.080134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:20.487 [2024-07-24 23:16:38.080151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.487 [2024-07-24 23:16:38.080157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.487 [2024-07-24 23:16:38.093294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:20.487 [2024-07-24 23:16:38.093311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:25470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.487 [2024-07-24 23:16:38.093317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.487 [2024-07-24 23:16:38.105002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:20.487 [2024-07-24 23:16:38.105018] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.487 [2024-07-24 23:16:38.105024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.487 [2024-07-24 23:16:38.118137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:20.487 [2024-07-24 23:16:38.118155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.487 [2024-07-24 23:16:38.118161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.487 [2024-07-24 23:16:38.130405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:20.487 [2024-07-24 23:16:38.130422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.487 [2024-07-24 23:16:38.130427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.487 [2024-07-24 23:16:38.142222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:20.487 [2024-07-24 23:16:38.142238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.487 [2024-07-24 23:16:38.142244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.487 [2024-07-24 23:16:38.156040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1d64b40) 00:28:20.487 [2024-07-24 23:16:38.156058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:56 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.487 [2024-07-24 23:16:38.156064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.487 [2024-07-24 23:16:38.168736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:20.487 [2024-07-24 23:16:38.168757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.487 [2024-07-24 23:16:38.168764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.487 [2024-07-24 23:16:38.180375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:20.487 [2024-07-24 23:16:38.180391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.487 [2024-07-24 23:16:38.180400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.487 [2024-07-24 23:16:38.191717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d64b40) 00:28:20.487 [2024-07-24 23:16:38.191734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.487 [2024-07-24 23:16:38.191740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.487 00:28:20.487 Latency(us) 00:28:20.487 Device Information 
: runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:20.487 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:20.487 nvme0n1 : 2.04 20247.99 79.09 0.00 0.00 6190.48 2280.11 48059.73 00:28:20.487 =================================================================================================================== 00:28:20.487 Total : 20247.99 79.09 0.00 0.00 6190.48 2280.11 48059.73 00:28:20.487 0 00:28:20.487 23:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:20.487 23:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:20.487 23:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:20.487 | .driver_specific 00:28:20.487 | .nvme_error 00:28:20.487 | .status_code 00:28:20.487 | .command_transient_transport_error' 00:28:20.487 23:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:20.748 23:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 162 > 0 )) 00:28:20.748 23:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1030951 00:28:20.748 23:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1030951 ']' 00:28:20.748 23:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1030951 00:28:20.748 23:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:20.748 23:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:20.748 23:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 1030951 00:28:20.748 23:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:20.748 23:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:20.748 23:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1030951' 00:28:20.748 killing process with pid 1030951 00:28:20.748 23:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1030951 00:28:20.748 Received shutdown signal, test time was about 2.000000 seconds 00:28:20.748 00:28:20.748 Latency(us) 00:28:20.748 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:20.748 =================================================================================================================== 00:28:20.748 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:20.748 23:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1030951 00:28:21.010 23:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:28:21.010 23:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:21.010 23:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:21.010 23:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:21.010 23:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:21.010 23:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1031777 00:28:21.010 23:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1031777 /var/tmp/bperf.sock 00:28:21.010 23:16:38 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1031777 ']' 00:28:21.010 23:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:21.010 23:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:21.010 23:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:21.010 23:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:21.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:21.010 23:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:21.010 23:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:21.010 [2024-07-24 23:16:38.649853] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:28:21.010 [2024-07-24 23:16:38.649907] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1031777 ] 00:28:21.010 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:21.010 Zero copy mechanism will not be used. 
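The `get_transient_errcount` helper seen above counts injected digest failures by calling `bdev_get_iostat` over the bperf RPC socket and filtering the JSON with jq. A standalone sketch of that same filter, run against a hypothetical, abbreviated iostat snippet (only the fields the filter touches are included; the count 162 matches the `(( 162 > 0 ))` check in the log; assumes `jq` is installed):

```shell
# Hypothetical, abbreviated bdev_get_iostat output -- only the fields
# the host/digest.sh jq filter reads are shown here.
cat > /tmp/iostat.json <<'EOF'
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 162
          }
        }
      }
    }
  ]
}
EOF

# Same filter as host/digest.sh: drill into the first bdev's NVMe
# error counters and print the transient transport error count.
jq -r '.bdevs[0]
       | .driver_specific
       | .nvme_error
       | .status_code
       | .command_transient_transport_error' /tmp/iostat.json
```

In the real test the JSON comes from `rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1` (enabled by `bdev_nvme_set_options --nvme-error-stat`), and the test passes if the extracted count is greater than zero.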
00:28:21.010 EAL: No free 2048 kB hugepages reported on node 1 00:28:21.010 [2024-07-24 23:16:38.732184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.010 [2024-07-24 23:16:38.785649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.952 23:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:21.952 23:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:21.952 23:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:21.952 23:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:21.952 23:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:21.952 23:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.952 23:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:21.952 23:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.952 23:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:21.952 23:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:22.213 nvme0n1 00:28:22.213 23:16:39 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:22.213 23:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.213 23:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:22.213 23:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.213 23:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:22.213 23:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:22.474 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:22.474 Zero copy mechanism will not be used. 00:28:22.474 Running I/O for 2 seconds... 
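The "data digest error" lines that follow come from the host recomputing the CRC32C data digest over each received C2H data PDU and finding a mismatch, because the test injects crc32c corruption via `accel_error_inject_error -o crc32c -t corrupt`. A minimal, self-contained CRC32C (Castagnoli) sketch in Python for illustration; this is not SPDK's accelerated implementation:

```python
def crc32c(data: bytes) -> int:
    """Bit-reflected CRC32C (Castagnoli), reflected polynomial
    0x82F63B78 -- the checksum NVMe/TCP uses for header and data
    digests when hdgst/ddgst are negotiated."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Shift right one bit, XOR in the polynomial on carry-out.
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard check value for the Castagnoli polynomial:
print(hex(crc32c(b"123456789")))  # → 0xe3069283
```

A receiver compares this value against the digest field trailing the PDU payload; any mismatch is surfaced as the transient transport error (00/22) completions logged below.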
00:28:22.474 [2024-07-24 23:16:40.095434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.474 [2024-07-24 23:16:40.095469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.474 [2024-07-24 23:16:40.095478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.474 [2024-07-24 23:16:40.106566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.474 [2024-07-24 23:16:40.106589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.474 [2024-07-24 23:16:40.106596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.474 [2024-07-24 23:16:40.118585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.474 [2024-07-24 23:16:40.118604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.474 [2024-07-24 23:16:40.118611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.474 [2024-07-24 23:16:40.129456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.474 [2024-07-24 23:16:40.129474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.474 [2024-07-24 23:16:40.129480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.474 [2024-07-24 23:16:40.140119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.474 [2024-07-24 23:16:40.140136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.474 [2024-07-24 23:16:40.140143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.474 [2024-07-24 23:16:40.151285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.474 [2024-07-24 23:16:40.151303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.474 [2024-07-24 23:16:40.151310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.474 [2024-07-24 23:16:40.161711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.474 [2024-07-24 23:16:40.161729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.474 [2024-07-24 23:16:40.161735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.474 [2024-07-24 23:16:40.173718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.474 [2024-07-24 23:16:40.173741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.474 [2024-07-24 23:16:40.173747] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.474 [2024-07-24 23:16:40.186169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.474 [2024-07-24 23:16:40.186187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.474 [2024-07-24 23:16:40.186194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.474 [2024-07-24 23:16:40.198947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.474 [2024-07-24 23:16:40.198966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.474 [2024-07-24 23:16:40.198972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.474 [2024-07-24 23:16:40.211345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.474 [2024-07-24 23:16:40.211363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.474 [2024-07-24 23:16:40.211370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.474 [2024-07-24 23:16:40.221215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.474 [2024-07-24 23:16:40.221233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:22.474 [2024-07-24 23:16:40.221240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.474 [2024-07-24 23:16:40.234032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.474 [2024-07-24 23:16:40.234051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.474 [2024-07-24 23:16:40.234058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.474 [2024-07-24 23:16:40.243366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.474 [2024-07-24 23:16:40.243384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.474 [2024-07-24 23:16:40.243390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.474 [2024-07-24 23:16:40.254323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.474 [2024-07-24 23:16:40.254341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.474 [2024-07-24 23:16:40.254347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.736 [2024-07-24 23:16:40.265418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.736 [2024-07-24 23:16:40.265436] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.736 [2024-07-24 23:16:40.265448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.736 [2024-07-24 23:16:40.277509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.736 [2024-07-24 23:16:40.277528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.736 [2024-07-24 23:16:40.277534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.736 [2024-07-24 23:16:40.288199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.736 [2024-07-24 23:16:40.288216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.736 [2024-07-24 23:16:40.288222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.736 [2024-07-24 23:16:40.297459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.736 [2024-07-24 23:16:40.297477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.736 [2024-07-24 23:16:40.297484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.736 [2024-07-24 23:16:40.307484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.736 [2024-07-24 
23:16:40.307502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.736 [2024-07-24 23:16:40.307509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.736 [2024-07-24 23:16:40.316095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.736 [2024-07-24 23:16:40.316113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.736 [2024-07-24 23:16:40.316119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.736 [2024-07-24 23:16:40.324210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.736 [2024-07-24 23:16:40.324227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.736 [2024-07-24 23:16:40.324233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.736 [2024-07-24 23:16:40.332975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.736 [2024-07-24 23:16:40.332993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.736 [2024-07-24 23:16:40.333000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.736 [2024-07-24 23:16:40.342870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1f807d0) 00:28:22.736 [2024-07-24 23:16:40.342887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.736 [2024-07-24 23:16:40.342893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.736 [2024-07-24 23:16:40.355210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.736 [2024-07-24 23:16:40.355230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.736 [2024-07-24 23:16:40.355237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.736 [2024-07-24 23:16:40.367578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.736 [2024-07-24 23:16:40.367596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.736 [2024-07-24 23:16:40.367602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.736 [2024-07-24 23:16:40.376966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.736 [2024-07-24 23:16:40.376984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.736 [2024-07-24 23:16:40.376990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.736 [2024-07-24 23:16:40.387868] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.736 [2024-07-24 23:16:40.387885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.736 [2024-07-24 23:16:40.387891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.736 [2024-07-24 23:16:40.396441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.736 [2024-07-24 23:16:40.396460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.736 [2024-07-24 23:16:40.396466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.736 [2024-07-24 23:16:40.408852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.736 [2024-07-24 23:16:40.408870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.736 [2024-07-24 23:16:40.408877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.736 [2024-07-24 23:16:40.420606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.736 [2024-07-24 23:16:40.420623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.736 [2024-07-24 23:16:40.420629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:28:22.736 [2024-07-24 23:16:40.430181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.736 [2024-07-24 23:16:40.430199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.736 [2024-07-24 23:16:40.430205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.736 [2024-07-24 23:16:40.440328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.736 [2024-07-24 23:16:40.440346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.736 [2024-07-24 23:16:40.440353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.736 [2024-07-24 23:16:40.451786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.736 [2024-07-24 23:16:40.451804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.736 [2024-07-24 23:16:40.451811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.736 [2024-07-24 23:16:40.464788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.736 [2024-07-24 23:16:40.464806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.736 [2024-07-24 23:16:40.464812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.736 [2024-07-24 23:16:40.475434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.736 [2024-07-24 23:16:40.475452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.736 [2024-07-24 23:16:40.475458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.736 [2024-07-24 23:16:40.488200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.736 [2024-07-24 23:16:40.488218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.736 [2024-07-24 23:16:40.488224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.736 [2024-07-24 23:16:40.498766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.736 [2024-07-24 23:16:40.498785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.736 [2024-07-24 23:16:40.498791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.736 [2024-07-24 23:16:40.511599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.736 [2024-07-24 23:16:40.511618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.736 [2024-07-24 23:16:40.511625] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.998 [2024-07-24 23:16:40.521581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.998 [2024-07-24 23:16:40.521602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.998 [2024-07-24 23:16:40.521608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.998 [2024-07-24 23:16:40.533786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.998 [2024-07-24 23:16:40.533805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.998 [2024-07-24 23:16:40.533811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.998 [2024-07-24 23:16:40.544693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.998 [2024-07-24 23:16:40.544711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.998 [2024-07-24 23:16:40.544720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.998 [2024-07-24 23:16:40.556186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.998 [2024-07-24 23:16:40.556206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:22.998 [2024-07-24 23:16:40.556212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.998 [2024-07-24 23:16:40.566947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.998 [2024-07-24 23:16:40.566966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.998 [2024-07-24 23:16:40.566972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.998 [2024-07-24 23:16:40.580672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.998 [2024-07-24 23:16:40.580690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.998 [2024-07-24 23:16:40.580696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.998 [2024-07-24 23:16:40.594248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.998 [2024-07-24 23:16:40.594266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.998 [2024-07-24 23:16:40.594273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.998 [2024-07-24 23:16:40.606933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.998 [2024-07-24 23:16:40.606952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:3 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.998 [2024-07-24 23:16:40.606958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.998 [2024-07-24 23:16:40.618736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.998 [2024-07-24 23:16:40.618760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.998 [2024-07-24 23:16:40.618767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.998 [2024-07-24 23:16:40.629972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.998 [2024-07-24 23:16:40.629991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.998 [2024-07-24 23:16:40.629997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.998 [2024-07-24 23:16:40.641117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.998 [2024-07-24 23:16:40.641136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.998 [2024-07-24 23:16:40.641142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.998 [2024-07-24 23:16:40.651373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.998 [2024-07-24 23:16:40.651396] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.998 [2024-07-24 23:16:40.651402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.998 [2024-07-24 23:16:40.663046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.998 [2024-07-24 23:16:40.663065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.998 [2024-07-24 23:16:40.663071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.998 [2024-07-24 23:16:40.673078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.998 [2024-07-24 23:16:40.673097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.998 [2024-07-24 23:16:40.673103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.998 [2024-07-24 23:16:40.685143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.998 [2024-07-24 23:16:40.685162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.998 [2024-07-24 23:16:40.685169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.998 [2024-07-24 23:16:40.698673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f807d0) 00:28:22.998 [2024-07-24 23:16:40.698692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.998 [2024-07-24 23:16:40.698698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.998 [2024-07-24 23:16:40.712080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.998 [2024-07-24 23:16:40.712099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.998 [2024-07-24 23:16:40.712106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.998 [2024-07-24 23:16:40.726238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.998 [2024-07-24 23:16:40.726256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.998 [2024-07-24 23:16:40.726262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.998 [2024-07-24 23:16:40.739283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.998 [2024-07-24 23:16:40.739302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.998 [2024-07-24 23:16:40.739309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.998 [2024-07-24 23:16:40.751977] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.998 [2024-07-24 23:16:40.751995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.998 [2024-07-24 23:16:40.752002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.998 [2024-07-24 23:16:40.764964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.998 [2024-07-24 23:16:40.764982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.999 [2024-07-24 23:16:40.764988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.999 [2024-07-24 23:16:40.778729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:22.999 [2024-07-24 23:16:40.778746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.999 [2024-07-24 23:16:40.778757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.260 [2024-07-24 23:16:40.791675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.260 [2024-07-24 23:16:40.791693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.260 [2024-07-24 23:16:40.791699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:28:23.260 [2024-07-24 23:16:40.804180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.260 [2024-07-24 23:16:40.804199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.260 [2024-07-24 23:16:40.804205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.260 [2024-07-24 23:16:40.818536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.260 [2024-07-24 23:16:40.818555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.260 [2024-07-24 23:16:40.818561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.260 [2024-07-24 23:16:40.830591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.260 [2024-07-24 23:16:40.830609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.260 [2024-07-24 23:16:40.830615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.260 [2024-07-24 23:16:40.841564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.260 [2024-07-24 23:16:40.841581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.260 [2024-07-24 23:16:40.841587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.260 [2024-07-24 23:16:40.852231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.260 [2024-07-24 23:16:40.852248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.260 [2024-07-24 23:16:40.852254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.260 [2024-07-24 23:16:40.863271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.260 [2024-07-24 23:16:40.863289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.260 [2024-07-24 23:16:40.863298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.260 [2024-07-24 23:16:40.874532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.260 [2024-07-24 23:16:40.874550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.260 [2024-07-24 23:16:40.874556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.260 [2024-07-24 23:16:40.887620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.260 [2024-07-24 23:16:40.887638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.260 [2024-07-24 23:16:40.887644] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.260 [2024-07-24 23:16:40.899150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.260 [2024-07-24 23:16:40.899168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.260 [2024-07-24 23:16:40.899174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.261 [2024-07-24 23:16:40.911502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.261 [2024-07-24 23:16:40.911519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.261 [2024-07-24 23:16:40.911525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.261 [2024-07-24 23:16:40.923421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.261 [2024-07-24 23:16:40.923439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.261 [2024-07-24 23:16:40.923445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.261 [2024-07-24 23:16:40.934864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.261 [2024-07-24 23:16:40.934882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:23.261 [2024-07-24 23:16:40.934888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.261 [2024-07-24 23:16:40.946600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.261 [2024-07-24 23:16:40.946618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.261 [2024-07-24 23:16:40.946624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.261 [2024-07-24 23:16:40.957643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.261 [2024-07-24 23:16:40.957661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.261 [2024-07-24 23:16:40.957668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.261 [2024-07-24 23:16:40.969876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.261 [2024-07-24 23:16:40.969894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.261 [2024-07-24 23:16:40.969900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.261 [2024-07-24 23:16:40.981209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.261 [2024-07-24 23:16:40.981227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.261 [2024-07-24 23:16:40.981232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.261 [2024-07-24 23:16:40.993369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.261 [2024-07-24 23:16:40.993387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.261 [2024-07-24 23:16:40.993393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.261 [2024-07-24 23:16:41.005649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.261 [2024-07-24 23:16:41.005667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.261 [2024-07-24 23:16:41.005673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.261 [2024-07-24 23:16:41.016861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.261 [2024-07-24 23:16:41.016879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.261 [2024-07-24 23:16:41.016886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.261 [2024-07-24 23:16:41.028959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.261 [2024-07-24 23:16:41.028977] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.261 [2024-07-24 23:16:41.028983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.261 [2024-07-24 23:16:41.041263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.261 [2024-07-24 23:16:41.041281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.261 [2024-07-24 23:16:41.041288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.522 [2024-07-24 23:16:41.052567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.522 [2024-07-24 23:16:41.052584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.522 [2024-07-24 23:16:41.052590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.522 [2024-07-24 23:16:41.064614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.522 [2024-07-24 23:16:41.064632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.522 [2024-07-24 23:16:41.064642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.522 [2024-07-24 23:16:41.076611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f807d0) 00:28:23.522 [2024-07-24 23:16:41.076629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.522 [2024-07-24 23:16:41.076636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.522 [2024-07-24 23:16:41.088412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.522 [2024-07-24 23:16:41.088430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.522 [2024-07-24 23:16:41.088436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.522 [2024-07-24 23:16:41.100522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.522 [2024-07-24 23:16:41.100540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.522 [2024-07-24 23:16:41.100547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.522 [2024-07-24 23:16:41.112130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.522 [2024-07-24 23:16:41.112148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.522 [2024-07-24 23:16:41.112154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.522 [2024-07-24 23:16:41.123896] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.522 [2024-07-24 23:16:41.123913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.522 [2024-07-24 23:16:41.123919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.522 [2024-07-24 23:16:41.136183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.522 [2024-07-24 23:16:41.136201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.522 [2024-07-24 23:16:41.136207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.522 [2024-07-24 23:16:41.146820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.522 [2024-07-24 23:16:41.146838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.522 [2024-07-24 23:16:41.146844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.522 [2024-07-24 23:16:41.159210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.522 [2024-07-24 23:16:41.159229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.522 [2024-07-24 23:16:41.159235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:28:23.522 [2024-07-24 23:16:41.170844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.522 [2024-07-24 23:16:41.170866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.522 [2024-07-24 23:16:41.170872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.523 [2024-07-24 23:16:41.182642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.523 [2024-07-24 23:16:41.182660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.523 [2024-07-24 23:16:41.182665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.523 [2024-07-24 23:16:41.194355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.523 [2024-07-24 23:16:41.194373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.523 [2024-07-24 23:16:41.194379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.523 [2024-07-24 23:16:41.206445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.523 [2024-07-24 23:16:41.206463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.523 [2024-07-24 23:16:41.206469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.523 [2024-07-24 23:16:41.218234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.523 [2024-07-24 23:16:41.218252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.523 [2024-07-24 23:16:41.218258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.523 [2024-07-24 23:16:41.231362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.523 [2024-07-24 23:16:41.231380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.523 [2024-07-24 23:16:41.231386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.523 [2024-07-24 23:16:41.244243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.523 [2024-07-24 23:16:41.244261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.523 [2024-07-24 23:16:41.244267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.523 [2024-07-24 23:16:41.254680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.523 [2024-07-24 23:16:41.254699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.523 [2024-07-24 23:16:41.254705] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.523 [2024-07-24 23:16:41.266129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.523 [2024-07-24 23:16:41.266148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.523 [2024-07-24 23:16:41.266154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.523 [2024-07-24 23:16:41.276898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.523 [2024-07-24 23:16:41.276916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.523 [2024-07-24 23:16:41.276923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.523 [2024-07-24 23:16:41.288685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.523 [2024-07-24 23:16:41.288704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.523 [2024-07-24 23:16:41.288710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.523 [2024-07-24 23:16:41.300947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.523 [2024-07-24 23:16:41.300965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:23.523 [2024-07-24 23:16:41.300972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.784 [2024-07-24 23:16:41.314662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.784 [2024-07-24 23:16:41.314681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.784 [2024-07-24 23:16:41.314687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.784 [2024-07-24 23:16:41.327039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.784 [2024-07-24 23:16:41.327057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.784 [2024-07-24 23:16:41.327064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.784 [2024-07-24 23:16:41.338457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.784 [2024-07-24 23:16:41.338475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.784 [2024-07-24 23:16:41.338481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.784 [2024-07-24 23:16:41.350314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.784 [2024-07-24 23:16:41.350331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.784 [2024-07-24 23:16:41.350338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.784 [2024-07-24 23:16:41.360945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.784 [2024-07-24 23:16:41.360963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.784 [2024-07-24 23:16:41.360969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.784 [2024-07-24 23:16:41.373399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.784 [2024-07-24 23:16:41.373417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.784 [2024-07-24 23:16:41.373426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.784 [2024-07-24 23:16:41.384309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.784 [2024-07-24 23:16:41.384327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.784 [2024-07-24 23:16:41.384333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.784 [2024-07-24 23:16:41.395736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.784 [2024-07-24 23:16:41.395758] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.784 [2024-07-24 23:16:41.395764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.784 [2024-07-24 23:16:41.407208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.784 [2024-07-24 23:16:41.407226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.784 [2024-07-24 23:16:41.407232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.784 [2024-07-24 23:16:41.420023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.784 [2024-07-24 23:16:41.420041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.784 [2024-07-24 23:16:41.420047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.784 [2024-07-24 23:16:41.431995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.784 [2024-07-24 23:16:41.432013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.784 [2024-07-24 23:16:41.432019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.784 [2024-07-24 23:16:41.443868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f807d0) 00:28:23.784 [2024-07-24 23:16:41.443886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.784 [2024-07-24 23:16:41.443891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.784 [2024-07-24 23:16:41.456634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.784 [2024-07-24 23:16:41.456651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.784 [2024-07-24 23:16:41.456657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.784 [2024-07-24 23:16:41.467905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.785 [2024-07-24 23:16:41.467923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.785 [2024-07-24 23:16:41.467930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.785 [2024-07-24 23:16:41.479518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.785 [2024-07-24 23:16:41.479540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.785 [2024-07-24 23:16:41.479546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.785 [2024-07-24 23:16:41.489597] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.785 [2024-07-24 23:16:41.489615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.785 [2024-07-24 23:16:41.489621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.785 [2024-07-24 23:16:41.501008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.785 [2024-07-24 23:16:41.501026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.785 [2024-07-24 23:16:41.501032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.785 [2024-07-24 23:16:41.510659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.785 [2024-07-24 23:16:41.510677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.785 [2024-07-24 23:16:41.510683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.785 [2024-07-24 23:16:41.524190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.785 [2024-07-24 23:16:41.524208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.785 [2024-07-24 23:16:41.524214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:23.785 [2024-07-24 23:16:41.536700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.785 [2024-07-24 23:16:41.536719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.785 [2024-07-24 23:16:41.536724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.785 [2024-07-24 23:16:41.549006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.785 [2024-07-24 23:16:41.549025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.785 [2024-07-24 23:16:41.549031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.785 [2024-07-24 23:16:41.562325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:23.785 [2024-07-24 23:16:41.562343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.785 [2024-07-24 23:16:41.562349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.047 [2024-07-24 23:16:41.574017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.047 [2024-07-24 23:16:41.574036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.047 [2024-07-24 23:16:41.574042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.047 [2024-07-24 23:16:41.586444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.047 [2024-07-24 23:16:41.586462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.047 [2024-07-24 23:16:41.586469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.047 [2024-07-24 23:16:41.596142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.047 [2024-07-24 23:16:41.596160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.047 [2024-07-24 23:16:41.596166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.047 [2024-07-24 23:16:41.606432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.047 [2024-07-24 23:16:41.606450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.047 [2024-07-24 23:16:41.606456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.047 [2024-07-24 23:16:41.616700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.047 [2024-07-24 23:16:41.616719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.047 [2024-07-24 23:16:41.616725] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.047 [2024-07-24 23:16:41.629263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.047 [2024-07-24 23:16:41.629281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.047 [2024-07-24 23:16:41.629288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.047 [2024-07-24 23:16:41.640787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.047 [2024-07-24 23:16:41.640805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.047 [2024-07-24 23:16:41.640811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.047 [2024-07-24 23:16:41.652174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.047 [2024-07-24 23:16:41.652193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.047 [2024-07-24 23:16:41.652199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.047 [2024-07-24 23:16:41.663890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.047 [2024-07-24 23:16:41.663909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:24.047 [2024-07-24 23:16:41.663916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.047 [2024-07-24 23:16:41.675801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.047 [2024-07-24 23:16:41.675819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.047 [2024-07-24 23:16:41.675828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.047 [2024-07-24 23:16:41.686643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.047 [2024-07-24 23:16:41.686661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.047 [2024-07-24 23:16:41.686668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.047 [2024-07-24 23:16:41.697380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.047 [2024-07-24 23:16:41.697398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.047 [2024-07-24 23:16:41.697404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.047 [2024-07-24 23:16:41.709244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.047 [2024-07-24 23:16:41.709263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:5 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.047 [2024-07-24 23:16:41.709269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.047 [2024-07-24 23:16:41.721192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.047 [2024-07-24 23:16:41.721209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.047 [2024-07-24 23:16:41.721215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.047 [2024-07-24 23:16:41.732870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.047 [2024-07-24 23:16:41.732888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.047 [2024-07-24 23:16:41.732895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.047 [2024-07-24 23:16:41.743705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.047 [2024-07-24 23:16:41.743723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.047 [2024-07-24 23:16:41.743730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.047 [2024-07-24 23:16:41.755602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.047 [2024-07-24 23:16:41.755619] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.047 [2024-07-24 23:16:41.755625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.047 [2024-07-24 23:16:41.767578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.047 [2024-07-24 23:16:41.767597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.047 [2024-07-24 23:16:41.767603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.047 [2024-07-24 23:16:41.779605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.047 [2024-07-24 23:16:41.779624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.047 [2024-07-24 23:16:41.779630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.047 [2024-07-24 23:16:41.790705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.047 [2024-07-24 23:16:41.790723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.047 [2024-07-24 23:16:41.790729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.047 [2024-07-24 23:16:41.800377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f807d0) 00:28:24.047 [2024-07-24 23:16:41.800394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.047 [2024-07-24 23:16:41.800400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.047 [2024-07-24 23:16:41.812467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.047 [2024-07-24 23:16:41.812485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.047 [2024-07-24 23:16:41.812492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.047 [2024-07-24 23:16:41.824358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.047 [2024-07-24 23:16:41.824375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.047 [2024-07-24 23:16:41.824381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.309 [2024-07-24 23:16:41.835455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.309 [2024-07-24 23:16:41.835473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.309 [2024-07-24 23:16:41.835479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.309 [2024-07-24 23:16:41.846778] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.309 [2024-07-24 23:16:41.846796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.309 [2024-07-24 23:16:41.846803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.309 [2024-07-24 23:16:41.857907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.309 [2024-07-24 23:16:41.857926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.309 [2024-07-24 23:16:41.857932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.309 [2024-07-24 23:16:41.868184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.309 [2024-07-24 23:16:41.868203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.309 [2024-07-24 23:16:41.868212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.309 [2024-07-24 23:16:41.880749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.309 [2024-07-24 23:16:41.880771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.309 [2024-07-24 23:16:41.880777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:28:24.309 [2024-07-24 23:16:41.891571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.309 [2024-07-24 23:16:41.891589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.309 [2024-07-24 23:16:41.891595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.309 [2024-07-24 23:16:41.904647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.309 [2024-07-24 23:16:41.904666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.309 [2024-07-24 23:16:41.904672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.309 [2024-07-24 23:16:41.917109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.309 [2024-07-24 23:16:41.917129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.309 [2024-07-24 23:16:41.917135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.309 [2024-07-24 23:16:41.928902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.309 [2024-07-24 23:16:41.928921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.309 [2024-07-24 23:16:41.928927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.309 [2024-07-24 23:16:41.941315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.309 [2024-07-24 23:16:41.941333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.309 [2024-07-24 23:16:41.941340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.309 [2024-07-24 23:16:41.952790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.309 [2024-07-24 23:16:41.952808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.309 [2024-07-24 23:16:41.952814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.309 [2024-07-24 23:16:41.964449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.310 [2024-07-24 23:16:41.964468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.310 [2024-07-24 23:16:41.964474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.310 [2024-07-24 23:16:41.975491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.310 [2024-07-24 23:16:41.975512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.310 [2024-07-24 23:16:41.975518] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.310 [2024-07-24 23:16:41.986854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.310 [2024-07-24 23:16:41.986873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.310 [2024-07-24 23:16:41.986879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.310 [2024-07-24 23:16:41.997467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.310 [2024-07-24 23:16:41.997485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.310 [2024-07-24 23:16:41.997491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.310 [2024-07-24 23:16:42.008160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.310 [2024-07-24 23:16:42.008179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.310 [2024-07-24 23:16:42.008185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.310 [2024-07-24 23:16:42.019871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.310 [2024-07-24 23:16:42.019890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:24.310 [2024-07-24 23:16:42.019896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.310 [2024-07-24 23:16:42.031498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.310 [2024-07-24 23:16:42.031517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.310 [2024-07-24 23:16:42.031523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.310 [2024-07-24 23:16:42.043913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.310 [2024-07-24 23:16:42.043932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.310 [2024-07-24 23:16:42.043938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.310 [2024-07-24 23:16:42.055287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.310 [2024-07-24 23:16:42.055306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.310 [2024-07-24 23:16:42.055311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.310 [2024-07-24 23:16:42.068087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.310 [2024-07-24 23:16:42.068105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.310 [2024-07-24 23:16:42.068112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.310 [2024-07-24 23:16:42.080364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f807d0) 00:28:24.310 [2024-07-24 23:16:42.080383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.310 [2024-07-24 23:16:42.080389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.310 00:28:24.310 Latency(us) 00:28:24.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:24.310 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:24.310 nvme0n1 : 2.00 2673.82 334.23 0.00 0.00 5981.11 1604.27 14090.24 00:28:24.310 =================================================================================================================== 00:28:24.310 Total : 2673.82 334.23 0.00 0.00 5981.11 1604.27 14090.24 00:28:24.310 0 00:28:24.571 23:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:24.571 23:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:24.571 23:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:24.571 23:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:24.571 | .driver_specific 00:28:24.571 | .nvme_error 00:28:24.571 | .status_code 00:28:24.571 | .command_transient_transport_error' 00:28:24.571 
23:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 172 > 0 )) 00:28:24.571 23:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1031777 00:28:24.571 23:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1031777 ']' 00:28:24.571 23:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1031777 00:28:24.571 23:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:24.571 23:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:24.571 23:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1031777 00:28:24.571 23:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:24.571 23:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:24.571 23:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1031777' 00:28:24.571 killing process with pid 1031777 00:28:24.571 23:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1031777 00:28:24.571 Received shutdown signal, test time was about 2.000000 seconds 00:28:24.571 00:28:24.571 Latency(us) 00:28:24.571 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:24.571 =================================================================================================================== 00:28:24.571 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:24.571 23:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1031777 00:28:24.831 23:16:42 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:28:24.831 23:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:24.831 23:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:24.831 23:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:24.831 23:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:24.831 23:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1032596 00:28:24.831 23:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1032596 /var/tmp/bperf.sock 00:28:24.831 23:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1032596 ']' 00:28:24.831 23:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:24.831 23:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:28:24.831 23:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:24.831 23:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:24.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:28:24.831 23:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:24.832 23:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:24.832 [2024-07-24 23:16:42.468747] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:28:24.832 [2024-07-24 23:16:42.468812] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1032596 ] 00:28:24.832 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.832 [2024-07-24 23:16:42.549002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.832 [2024-07-24 23:16:42.602766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.772 23:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:25.772 23:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:25.772 23:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:25.772 23:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:25.772 23:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:25.772 23:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.772 23:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:25.772 23:16:43 
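bdevperf is launched here with `-m 2`, and the EAL notices report one core available with the reactor starting on core 1. That follows from how SPDK interprets `-m` as a hex core mask; a small sketch decoding the mask (the 8-bit scan range is an arbitrary assumption for illustration):

```shell
# Decode an SPDK core mask: each set bit selects one CPU core.
mask=0x2
for bit in $(seq 0 7); do
  if (( (mask >> bit) & 1 )); then
    echo "core $bit"
  fi
done
```

For mask `0x2` only bit 1 is set, so the output is `core 1`, consistent with "Reactor started on core 1" in the log.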
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.772 23:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:25.772 23:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:26.032 nvme0n1 00:28:26.032 23:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:26.032 23:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.032 23:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:26.032 23:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.032 23:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:26.032 23:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:26.032 Running I/O for 2 seconds... 
00:28:26.032 [2024-07-24 23:16:43.737367] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.032 [2024-07-24 23:16:43.737762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.032 [2024-07-24 23:16:43.737788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.032 [2024-07-24 23:16:43.749560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.032 [2024-07-24 23:16:43.749897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.032 [2024-07-24 23:16:43.749915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.032 [2024-07-24 23:16:43.761699] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.032 [2024-07-24 23:16:43.762092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.032 [2024-07-24 23:16:43.762109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.032 [2024-07-24 23:16:43.773867] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.032 [2024-07-24 23:16:43.774249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.032 [2024-07-24 23:16:43.774265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.032 [2024-07-24 23:16:43.786001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.032 [2024-07-24 23:16:43.786381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.032 [2024-07-24 23:16:43.786397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.032 [2024-07-24 23:16:43.798148] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.032 [2024-07-24 23:16:43.798547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.032 [2024-07-24 23:16:43.798563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.032 [2024-07-24 23:16:43.810306] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.032 [2024-07-24 23:16:43.810694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.032 [2024-07-24 23:16:43.810710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.294 [2024-07-24 23:16:43.822420] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.294 [2024-07-24 23:16:43.822667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.294 [2024-07-24 23:16:43.822683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.294 [2024-07-24 23:16:43.834502] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.294 [2024-07-24 23:16:43.834847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.294 [2024-07-24 23:16:43.834862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.294 [2024-07-24 23:16:43.846620] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.294 [2024-07-24 23:16:43.846864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.294 [2024-07-24 23:16:43.846880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.294 [2024-07-24 23:16:43.858759] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.294 [2024-07-24 23:16:43.859182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.294 [2024-07-24 23:16:43.859197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.294 [2024-07-24 23:16:43.870933] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.294 [2024-07-24 23:16:43.871282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.294 [2024-07-24 23:16:43.871298] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.294 [2024-07-24 23:16:43.883068] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.294 [2024-07-24 23:16:43.883443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.294 [2024-07-24 23:16:43.883458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.294 [2024-07-24 23:16:43.895164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.294 [2024-07-24 23:16:43.895580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.294 [2024-07-24 23:16:43.895595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.294 [2024-07-24 23:16:43.907233] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.294 [2024-07-24 23:16:43.907665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.294 [2024-07-24 23:16:43.907680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.294 [2024-07-24 23:16:43.919331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.294 [2024-07-24 23:16:43.919747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.294 [2024-07-24 
23:16:43.919765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.294 [2024-07-24 23:16:43.931390] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.294 [2024-07-24 23:16:43.931735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.294 [2024-07-24 23:16:43.931756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.294 [2024-07-24 23:16:43.943484] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.294 [2024-07-24 23:16:43.943901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.294 [2024-07-24 23:16:43.943916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.294 [2024-07-24 23:16:43.955730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.294 [2024-07-24 23:16:43.956155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.294 [2024-07-24 23:16:43.956170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.294 [2024-07-24 23:16:43.967805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.294 [2024-07-24 23:16:43.968060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5195 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:26.294 [2024-07-24 23:16:43.968074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.294 [2024-07-24 23:16:43.979902] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.294 [2024-07-24 23:16:43.980328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.294 [2024-07-24 23:16:43.980343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.294 [2024-07-24 23:16:43.992013] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.294 [2024-07-24 23:16:43.992414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.294 [2024-07-24 23:16:43.992429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.294 [2024-07-24 23:16:44.004106] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.294 [2024-07-24 23:16:44.004555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.294 [2024-07-24 23:16:44.004569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.294 [2024-07-24 23:16:44.016238] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.294 [2024-07-24 23:16:44.016656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 
lba:9437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.294 [2024-07-24 23:16:44.016671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.294 [2024-07-24 23:16:44.028292] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.294 [2024-07-24 23:16:44.028665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.294 [2024-07-24 23:16:44.028681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.294 [2024-07-24 23:16:44.040417] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.294 [2024-07-24 23:16:44.040839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.294 [2024-07-24 23:16:44.040854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.294 [2024-07-24 23:16:44.052468] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.294 [2024-07-24 23:16:44.052804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.294 [2024-07-24 23:16:44.052819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.294 [2024-07-24 23:16:44.064623] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.294 [2024-07-24 23:16:44.065083] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.294 [2024-07-24 23:16:44.065098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.294 [2024-07-24 23:16:44.076818] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.294 [2024-07-24 23:16:44.077205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.294 [2024-07-24 23:16:44.077219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.559 [2024-07-24 23:16:44.088862] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.559 [2024-07-24 23:16:44.089228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.559 [2024-07-24 23:16:44.089243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.559 [2024-07-24 23:16:44.100940] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.559 [2024-07-24 23:16:44.101333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.559 [2024-07-24 23:16:44.101349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.559 [2024-07-24 23:16:44.113055] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.559 [2024-07-24 23:16:44.113495] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.559 [2024-07-24 23:16:44.113509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.559 [2024-07-24 23:16:44.125171] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.559 [2024-07-24 23:16:44.125581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.559 [2024-07-24 23:16:44.125595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.559 [2024-07-24 23:16:44.137263] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.559 [2024-07-24 23:16:44.137513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.559 [2024-07-24 23:16:44.137527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.559 [2024-07-24 23:16:44.149373] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.559 [2024-07-24 23:16:44.149784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.559 [2024-07-24 23:16:44.149799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.559 [2024-07-24 23:16:44.161635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 
00:28:26.559 [2024-07-24 23:16:44.161885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.559 [2024-07-24 23:16:44.161900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.559 [2024-07-24 23:16:44.173740] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.559 [2024-07-24 23:16:44.174106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.559 [2024-07-24 23:16:44.174121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.559 [2024-07-24 23:16:44.185793] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.559 [2024-07-24 23:16:44.186210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.559 [2024-07-24 23:16:44.186226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.559 [2024-07-24 23:16:44.197838] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.559 [2024-07-24 23:16:44.198223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.559 [2024-07-24 23:16:44.198237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.559 [2024-07-24 23:16:44.209950] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.559 [2024-07-24 23:16:44.210351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.559 [2024-07-24 23:16:44.210365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.559 [2024-07-24 23:16:44.222018] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.559 [2024-07-24 23:16:44.222268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.559 [2024-07-24 23:16:44.222283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.559 [2024-07-24 23:16:44.234125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.559 [2024-07-24 23:16:44.234543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.559 [2024-07-24 23:16:44.234557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.559 [2024-07-24 23:16:44.246257] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.559 [2024-07-24 23:16:44.246631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.559 [2024-07-24 23:16:44.246646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.559 [2024-07-24 23:16:44.258413] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.559 [2024-07-24 23:16:44.258795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.559 [2024-07-24 23:16:44.258810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.559 [2024-07-24 23:16:44.270552] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.559 [2024-07-24 23:16:44.270990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.559 [2024-07-24 23:16:44.271005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.559 [2024-07-24 23:16:44.282629] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.559 [2024-07-24 23:16:44.283016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.559 [2024-07-24 23:16:44.283031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.559 [2024-07-24 23:16:44.294724] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.559 [2024-07-24 23:16:44.295142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.559 [2024-07-24 23:16:44.295157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
00:28:26.559 [2024-07-24 23:16:44.306900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.559 [2024-07-24 23:16:44.307131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.559 [2024-07-24 23:16:44.307146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.559 [2024-07-24 23:16:44.319313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.559 [2024-07-24 23:16:44.319678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.559 [2024-07-24 23:16:44.319694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.559 [2024-07-24 23:16:44.331434] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.559 [2024-07-24 23:16:44.331677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.559 [2024-07-24 23:16:44.331697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.852 [2024-07-24 23:16:44.343503] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.852 [2024-07-24 23:16:44.343943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.852 [2024-07-24 23:16:44.343958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.852 [2024-07-24 23:16:44.355639] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.852 [2024-07-24 23:16:44.356093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.852 [2024-07-24 23:16:44.356111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.852 [2024-07-24 23:16:44.367725] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.852 [2024-07-24 23:16:44.368080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.852 [2024-07-24 23:16:44.368095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.852 [2024-07-24 23:16:44.379865] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.852 [2024-07-24 23:16:44.380248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.852 [2024-07-24 23:16:44.380263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.852 [2024-07-24 23:16:44.392057] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.852 [2024-07-24 23:16:44.392293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.852 [2024-07-24 23:16:44.392307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.852 [2024-07-24 23:16:44.404164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.852 [2024-07-24 23:16:44.404587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.852 [2024-07-24 23:16:44.404602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.852 [2024-07-24 23:16:44.416279] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.852 [2024-07-24 23:16:44.416750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.852 [2024-07-24 23:16:44.416767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.852 [2024-07-24 23:16:44.428468] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.852 [2024-07-24 23:16:44.428712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.852 [2024-07-24 23:16:44.428726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.852 [2024-07-24 23:16:44.440655] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.852 [2024-07-24 23:16:44.440991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.852 [2024-07-24 23:16:44.441006] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.852 [2024-07-24 23:16:44.452735] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.852 [2024-07-24 23:16:44.453083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.852 [2024-07-24 23:16:44.453098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.852 [2024-07-24 23:16:44.464785] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.852 [2024-07-24 23:16:44.465154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.852 [2024-07-24 23:16:44.465169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.852 [2024-07-24 23:16:44.476889] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.852 [2024-07-24 23:16:44.477330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.852 [2024-07-24 23:16:44.477345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.852 [2024-07-24 23:16:44.488951] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.852 [2024-07-24 23:16:44.489202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.852 
[2024-07-24 23:16:44.489217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.852 [2024-07-24 23:16:44.501060] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.852 [2024-07-24 23:16:44.501401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.852 [2024-07-24 23:16:44.501416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.852 [2024-07-24 23:16:44.513165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.852 [2024-07-24 23:16:44.513575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.852 [2024-07-24 23:16:44.513590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.852 [2024-07-24 23:16:44.525401] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.853 [2024-07-24 23:16:44.525816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.853 [2024-07-24 23:16:44.525832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.853 [2024-07-24 23:16:44.537470] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.853 [2024-07-24 23:16:44.537926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24444 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:26.853 [2024-07-24 23:16:44.537941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.853 [2024-07-24 23:16:44.549582] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.853 [2024-07-24 23:16:44.549994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.853 [2024-07-24 23:16:44.550009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.853 [2024-07-24 23:16:44.561671] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.853 [2024-07-24 23:16:44.562086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.853 [2024-07-24 23:16:44.562101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.853 [2024-07-24 23:16:44.573734] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.853 [2024-07-24 23:16:44.574212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.853 [2024-07-24 23:16:44.574227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.853 [2024-07-24 23:16:44.586002] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.853 [2024-07-24 23:16:44.586422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:1 lba:23853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.853 [2024-07-24 23:16:44.586437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.853 [2024-07-24 23:16:44.598060] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.853 [2024-07-24 23:16:44.598501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.853 [2024-07-24 23:16:44.598516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.853 [2024-07-24 23:16:44.610154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:26.853 [2024-07-24 23:16:44.610534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.853 [2024-07-24 23:16:44.610549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.118 [2024-07-24 23:16:44.622232] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.118 [2024-07-24 23:16:44.622594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.118 [2024-07-24 23:16:44.622609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.118 [2024-07-24 23:16:44.634383] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.118 [2024-07-24 23:16:44.634837] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.118 [2024-07-24 23:16:44.634852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.118 [2024-07-24 23:16:44.646484] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.118 [2024-07-24 23:16:44.646731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.118 [2024-07-24 23:16:44.646746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.118 [2024-07-24 23:16:44.658585] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.118 [2024-07-24 23:16:44.658950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.118 [2024-07-24 23:16:44.658965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.118 [2024-07-24 23:16:44.670672] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.118 [2024-07-24 23:16:44.671061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.118 [2024-07-24 23:16:44.671079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.118 [2024-07-24 23:16:44.682782] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.118 
[2024-07-24 23:16:44.683136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.118 [2024-07-24 23:16:44.683151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.118 [2024-07-24 23:16:44.694963] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.118 [2024-07-24 23:16:44.695348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.118 [2024-07-24 23:16:44.695362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.118 [2024-07-24 23:16:44.707015] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.118 [2024-07-24 23:16:44.707246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.118 [2024-07-24 23:16:44.707260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.118 [2024-07-24 23:16:44.719111] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.119 [2024-07-24 23:16:44.719563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.119 [2024-07-24 23:16:44.719577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.119 [2024-07-24 23:16:44.731195] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) 
with pdu=0x2000190fef90 00:28:27.119 [2024-07-24 23:16:44.731573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.119 [2024-07-24 23:16:44.731588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.119 [2024-07-24 23:16:44.743263] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.119 [2024-07-24 23:16:44.743721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.119 [2024-07-24 23:16:44.743736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.119 [2024-07-24 23:16:44.755317] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.119 [2024-07-24 23:16:44.755702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.119 [2024-07-24 23:16:44.755718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.119 [2024-07-24 23:16:44.767423] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.119 [2024-07-24 23:16:44.767808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.119 [2024-07-24 23:16:44.767823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.119 [2024-07-24 23:16:44.779617] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.119 [2024-07-24 23:16:44.780002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.119 [2024-07-24 23:16:44.780020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.119 [2024-07-24 23:16:44.791832] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.119 [2024-07-24 23:16:44.792291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.119 [2024-07-24 23:16:44.792306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.119 [2024-07-24 23:16:44.804056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.119 [2024-07-24 23:16:44.804433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.119 [2024-07-24 23:16:44.804448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.119 [2024-07-24 23:16:44.816235] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.119 [2024-07-24 23:16:44.816643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.119 [2024-07-24 23:16:44.816658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.119 [2024-07-24 23:16:44.828317] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.119 [2024-07-24 23:16:44.828552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.119 [2024-07-24 23:16:44.828566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.119 [2024-07-24 23:16:44.840447] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.119 [2024-07-24 23:16:44.840856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.119 [2024-07-24 23:16:44.840871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.119 [2024-07-24 23:16:44.852523] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.119 [2024-07-24 23:16:44.852889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.119 [2024-07-24 23:16:44.852904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.119 [2024-07-24 23:16:44.864652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.119 [2024-07-24 23:16:44.864995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.119 [2024-07-24 23:16:44.865010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
00:28:27.119 [2024-07-24 23:16:44.876794] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.119 [2024-07-24 23:16:44.877201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.119 [2024-07-24 23:16:44.877217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.119 [2024-07-24 23:16:44.888894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.119 [2024-07-24 23:16:44.889270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.119 [2024-07-24 23:16:44.889285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.119 [2024-07-24 23:16:44.900968] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.119 [2024-07-24 23:16:44.901212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.119 [2024-07-24 23:16:44.901227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.379 [2024-07-24 23:16:44.913070] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.379 [2024-07-24 23:16:44.913486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.379 [2024-07-24 23:16:44.913501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.379 [2024-07-24 23:16:44.925184] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.379 [2024-07-24 23:16:44.925435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.379 [2024-07-24 23:16:44.925450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.379 [2024-07-24 23:16:44.937281] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.379 [2024-07-24 23:16:44.937515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.379 [2024-07-24 23:16:44.937529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.379 [2024-07-24 23:16:44.949396] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.379 [2024-07-24 23:16:44.949774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.379 [2024-07-24 23:16:44.949789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.379 [2024-07-24 23:16:44.961580] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.379 [2024-07-24 23:16:44.962005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.379 [2024-07-24 23:16:44.962020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.379 [2024-07-24 23:16:44.973721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.379 [2024-07-24 23:16:44.974127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.379 [2024-07-24 23:16:44.974142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.379 [2024-07-24 23:16:44.985881] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.379 [2024-07-24 23:16:44.986338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.379 [2024-07-24 23:16:44.986353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.380 [2024-07-24 23:16:44.997962] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.380 [2024-07-24 23:16:44.998315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.380 [2024-07-24 23:16:44.998331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.380 [2024-07-24 23:16:45.010020] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.380 [2024-07-24 23:16:45.010410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.380 [2024-07-24 23:16:45.010426] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.380 [2024-07-24 23:16:45.022120] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.380 [2024-07-24 23:16:45.022539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.380 [2024-07-24 23:16:45.022554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.380 [2024-07-24 23:16:45.034205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.380 [2024-07-24 23:16:45.034649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.380 [2024-07-24 23:16:45.034664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.380 [2024-07-24 23:16:45.046360] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.380 [2024-07-24 23:16:45.046808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.380 [2024-07-24 23:16:45.046823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.380 [2024-07-24 23:16:45.058539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.380 [2024-07-24 23:16:45.058902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.380 
[2024-07-24 23:16:45.058917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.380 [2024-07-24 23:16:45.070617] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.380 [2024-07-24 23:16:45.070884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.380 [2024-07-24 23:16:45.070899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.380 [2024-07-24 23:16:45.082707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.380 [2024-07-24 23:16:45.083142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.380 [2024-07-24 23:16:45.083158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.380 [2024-07-24 23:16:45.094744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.380 [2024-07-24 23:16:45.095101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.380 [2024-07-24 23:16:45.095122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.380 [2024-07-24 23:16:45.106843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.380 [2024-07-24 23:16:45.107277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6706 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:27.380 [2024-07-24 23:16:45.107292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.380 [2024-07-24 23:16:45.118907] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.380 [2024-07-24 23:16:45.119169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.380 [2024-07-24 23:16:45.119184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.380 [2024-07-24 23:16:45.131004] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.380 [2024-07-24 23:16:45.131287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.380 [2024-07-24 23:16:45.131302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.380 [2024-07-24 23:16:45.143111] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.380 [2024-07-24 23:16:45.143488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.380 [2024-07-24 23:16:45.143503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.380 [2024-07-24 23:16:45.155212] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.380 [2024-07-24 23:16:45.155560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:21994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.380 [2024-07-24 23:16:45.155575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.640 [2024-07-24 23:16:45.167246] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.640 [2024-07-24 23:16:45.167673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.640 [2024-07-24 23:16:45.167688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.640 [2024-07-24 23:16:45.179362] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.640 [2024-07-24 23:16:45.179592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.640 [2024-07-24 23:16:45.179606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.640 [2024-07-24 23:16:45.191478] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.640 [2024-07-24 23:16:45.191906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.640 [2024-07-24 23:16:45.191921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.640 [2024-07-24 23:16:45.203607] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.640 [2024-07-24 23:16:45.203999] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.640 [2024-07-24 23:16:45.204014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.640 [2024-07-24 23:16:45.215675] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.640 [2024-07-24 23:16:45.215921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.640 [2024-07-24 23:16:45.215936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.640 [2024-07-24 23:16:45.227798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.640 [2024-07-24 23:16:45.228240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.640 [2024-07-24 23:16:45.228256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.640 [2024-07-24 23:16:45.239991] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.640 [2024-07-24 23:16:45.240338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.640 [2024-07-24 23:16:45.240353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.640 [2024-07-24 23:16:45.252170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.640 
[2024-07-24 23:16:45.252590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.640 [2024-07-24 23:16:45.252604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.640 [2024-07-24 23:16:45.264242] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.640 [2024-07-24 23:16:45.264498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.640 [2024-07-24 23:16:45.264513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.640 [2024-07-24 23:16:45.276347] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.640 [2024-07-24 23:16:45.276775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.640 [2024-07-24 23:16:45.276791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.640 [2024-07-24 23:16:45.288414] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.640 [2024-07-24 23:16:45.288860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.640 [2024-07-24 23:16:45.288875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.640 [2024-07-24 23:16:45.300540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) 
with pdu=0x2000190fef90 00:28:27.640 [2024-07-24 23:16:45.300984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.640 [2024-07-24 23:16:45.300999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.640 [2024-07-24 23:16:45.312682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.640 [2024-07-24 23:16:45.313110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.640 [2024-07-24 23:16:45.313125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.640 [2024-07-24 23:16:45.325003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.640 [2024-07-24 23:16:45.325411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.640 [2024-07-24 23:16:45.325426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.640 [2024-07-24 23:16:45.337087] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.640 [2024-07-24 23:16:45.337333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.640 [2024-07-24 23:16:45.337355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.640 [2024-07-24 23:16:45.349275] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.640 [2024-07-24 23:16:45.349642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.640 [2024-07-24 23:16:45.349657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.640 [2024-07-24 23:16:45.361358] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.640 [2024-07-24 23:16:45.361724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.641 [2024-07-24 23:16:45.361739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.641 [2024-07-24 23:16:45.373456] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.641 [2024-07-24 23:16:45.373842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.641 [2024-07-24 23:16:45.373857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.641 [2024-07-24 23:16:45.385595] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.641 [2024-07-24 23:16:45.386058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.641 [2024-07-24 23:16:45.386074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.641 [2024-07-24 23:16:45.397664] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.641 [2024-07-24 23:16:45.398121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.641 [2024-07-24 23:16:45.398137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.641 [2024-07-24 23:16:45.409786] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.641 [2024-07-24 23:16:45.410156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.641 [2024-07-24 23:16:45.410172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.641 [2024-07-24 23:16:45.421881] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.641 [2024-07-24 23:16:45.422305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.641 [2024-07-24 23:16:45.422321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.901 [2024-07-24 23:16:45.433996] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.901 [2024-07-24 23:16:45.434236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.901 [2024-07-24 23:16:45.434251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
00:28:27.901 [2024-07-24 23:16:45.446115] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.901 [2024-07-24 23:16:45.446543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.901 [2024-07-24 23:16:45.446558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.901 [2024-07-24 23:16:45.458255] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.901 [2024-07-24 23:16:45.458636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.901 [2024-07-24 23:16:45.458651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.902 [2024-07-24 23:16:45.470418] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.902 [2024-07-24 23:16:45.470672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.902 [2024-07-24 23:16:45.470686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.902 [2024-07-24 23:16:45.482526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.902 [2024-07-24 23:16:45.482865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.902 [2024-07-24 23:16:45.482881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.902 [2024-07-24 23:16:45.494626] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.902 [2024-07-24 23:16:45.495020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.902 [2024-07-24 23:16:45.495036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.902 [2024-07-24 23:16:45.506738] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.902 [2024-07-24 23:16:45.507174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.902 [2024-07-24 23:16:45.507189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.902 [2024-07-24 23:16:45.518875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.902 [2024-07-24 23:16:45.519341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.902 [2024-07-24 23:16:45.519359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.902 [2024-07-24 23:16:45.530973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.902 [2024-07-24 23:16:45.531421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.902 [2024-07-24 23:16:45.531436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.902 [2024-07-24 23:16:45.543059] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.902 [2024-07-24 23:16:45.543495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.902 [2024-07-24 23:16:45.543510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.902 [2024-07-24 23:16:45.555147] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.902 [2024-07-24 23:16:45.555567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.902 [2024-07-24 23:16:45.555582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.902 [2024-07-24 23:16:45.567325] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.902 [2024-07-24 23:16:45.567668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.902 [2024-07-24 23:16:45.567683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.902 [2024-07-24 23:16:45.579449] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.902 [2024-07-24 23:16:45.579866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.902 [2024-07-24 23:16:45.579881] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.902 [2024-07-24 23:16:45.591491] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.902 [2024-07-24 23:16:45.591923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.902 [2024-07-24 23:16:45.591938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.902 [2024-07-24 23:16:45.603597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.902 [2024-07-24 23:16:45.603955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.902 [2024-07-24 23:16:45.603971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.902 [2024-07-24 23:16:45.615727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.902 [2024-07-24 23:16:45.616070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.902 [2024-07-24 23:16:45.616085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.902 [2024-07-24 23:16:45.627822] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.902 [2024-07-24 23:16:45.628099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.902 
[2024-07-24 23:16:45.628114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.902 [2024-07-24 23:16:45.640029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.902 [2024-07-24 23:16:45.640461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.902 [2024-07-24 23:16:45.640476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.902 [2024-07-24 23:16:45.652123] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.902 [2024-07-24 23:16:45.652564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.902 [2024-07-24 23:16:45.652579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.902 [2024-07-24 23:16:45.664233] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.902 [2024-07-24 23:16:45.664650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.902 [2024-07-24 23:16:45.664665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.902 [2024-07-24 23:16:45.676282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:27.902 [2024-07-24 23:16:45.676665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11912 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:27.902 [2024-07-24 23:16:45.676680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:28.163 [2024-07-24 23:16:45.688550] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:28.163 [2024-07-24 23:16:45.688991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.163 [2024-07-24 23:16:45.689007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:28.163 [2024-07-24 23:16:45.700649] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:28.163 [2024-07-24 23:16:45.701002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.163 [2024-07-24 23:16:45.701017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:28.163 [2024-07-24 23:16:45.712738] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:28.163 [2024-07-24 23:16:45.713197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.163 [2024-07-24 23:16:45.713212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:28.163 [2024-07-24 23:16:45.724830] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1925fc0) with pdu=0x2000190fef90 00:28:28.163 [2024-07-24 23:16:45.725193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:1658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:28.163 [2024-07-24 23:16:45.725208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:28:28.163
00:28:28.163 Latency(us)
00:28:28.163 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:28.163 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:28.163 nvme0n1 : 2.01 21030.22 82.15 0.00 0.00 6074.72 5652.48 12506.45
00:28:28.163 ===================================================================================================================
00:28:28.163 Total : 21030.22 82.15 0.00 0.00 6074.72 5652.48 12506.45
00:28:28.163 0
00:28:28.163 23:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:28.163 23:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:28.163 23:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:28.163 23:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:28.163 | .driver_specific
00:28:28.163 | .nvme_error
00:28:28.163 | .status_code
00:28:28.163 | .command_transient_transport_error'
00:28:28.163 23:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 165 > 0 ))
00:28:28.163 23:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1032596
00:28:28.163 23:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1032596 ']'
00:28:28.163 23:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1032596
00:28:28.163 23:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:28:28.163 23:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:28.163 23:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1032596
00:28:28.163 23:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:28.163 23:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:28:28.163 23:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1032596'
00:28:28.163 killing process with pid 1032596
00:28:28.163 23:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1032596
00:28:28.163 Received shutdown signal, test time was about 2.000000 seconds
00:28:28.163
00:28:28.163 Latency(us)
00:28:28.163 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:28.163 ===================================================================================================================
00:28:28.163 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:28.163 23:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1032596
00:28:28.423 23:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:28:28.423 23:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:28.423 23:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:28.423 23:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:28.424 23:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:28.424 23:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1033297
00:28:28.424 23:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1033297 /var/tmp/bperf.sock
00:28:28.424 23:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1033297 ']'
00:28:28.424 23:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:28:28.424 23:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:28.424 23:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:28.424 23:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:28.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:28.424 23:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:28.424 23:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:28.424 [2024-07-24 23:16:46.116701] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization...
00:28:28.424 [2024-07-24 23:16:46.116763] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1033297 ]
00:28:28.424 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:28.424 Zero copy mechanism will not be used.
00:28:28.424 EAL: No free 2048 kB hugepages reported on node 1
00:28:28.684 [2024-07-24 23:16:46.196686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:28.684 [2024-07-24 23:16:46.249850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:29.254 23:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:29.254 23:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:28:29.254 23:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:29.254 23:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:29.254 23:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:29.254 23:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:29.254 23:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:29.515 23:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:29.515 23:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:29.515 23:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:29.776 nvme0n1
00:28:29.776 23:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:29.776 23:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:29.776 23:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:29.776 23:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:29.776 23:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:29.776 23:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:29.776 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:29.776 Zero copy mechanism will not be used.
00:28:29.776 Running I/O for 2 seconds...
00:28:29.776 [2024-07-24 23:16:47.538191] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:29.776 [2024-07-24 23:16:47.538275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.777 [2024-07-24 23:16:47.538300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.777 [2024-07-24 23:16:47.547332] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:29.777 [2024-07-24 23:16:47.547653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.777 [2024-07-24 23:16:47.547673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.777 [2024-07-24 23:16:47.557765] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:29.777 [2024-07-24 23:16:47.558119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.777 [2024-07-24 23:16:47.558137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.038 [2024-07-24 23:16:47.567377] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.038 [2024-07-24 23:16:47.567479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.038 [2024-07-24 23:16:47.567495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.038 [2024-07-24 23:16:47.577477] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.038 [2024-07-24 23:16:47.577905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.038 [2024-07-24 23:16:47.577923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.038 [2024-07-24 23:16:47.588105] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.038 [2024-07-24 23:16:47.588437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.038 [2024-07-24 23:16:47.588454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.038 [2024-07-24 23:16:47.597099] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.038 [2024-07-24 23:16:47.597348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.038 [2024-07-24 23:16:47.597366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.038 [2024-07-24 23:16:47.605775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.038 [2024-07-24 23:16:47.606093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.038 [2024-07-24 23:16:47.606111] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.038 [2024-07-24 23:16:47.617003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.038 [2024-07-24 23:16:47.617326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.038 [2024-07-24 23:16:47.617344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.038 [2024-07-24 23:16:47.627397] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.038 [2024-07-24 23:16:47.627639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.038 [2024-07-24 23:16:47.627657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.038 [2024-07-24 23:16:47.638470] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.038 [2024-07-24 23:16:47.638737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.038 [2024-07-24 23:16:47.638760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.038 [2024-07-24 23:16:47.648558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.038 [2024-07-24 23:16:47.648792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:30.038 [2024-07-24 23:16:47.648809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.038 [2024-07-24 23:16:47.659505] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.038 [2024-07-24 23:16:47.660001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.038 [2024-07-24 23:16:47.660019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.038 [2024-07-24 23:16:47.670077] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.038 [2024-07-24 23:16:47.670397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.038 [2024-07-24 23:16:47.670415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.038 [2024-07-24 23:16:47.681147] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.038 [2024-07-24 23:16:47.681466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.038 [2024-07-24 23:16:47.681484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.038 [2024-07-24 23:16:47.690126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.038 [2024-07-24 23:16:47.690355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.038 [2024-07-24 23:16:47.690372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.038 [2024-07-24 23:16:47.699035] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.038 [2024-07-24 23:16:47.699192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.038 [2024-07-24 23:16:47.699206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.038 [2024-07-24 23:16:47.708683] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.038 [2024-07-24 23:16:47.709031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.038 [2024-07-24 23:16:47.709048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.038 [2024-07-24 23:16:47.716845] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.038 [2024-07-24 23:16:47.717062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.038 [2024-07-24 23:16:47.717078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.038 [2024-07-24 23:16:47.724343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.038 [2024-07-24 23:16:47.724558] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.038 [2024-07-24 23:16:47.724575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.038 [2024-07-24 23:16:47.732270] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.038 [2024-07-24 23:16:47.732593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.038 [2024-07-24 23:16:47.732611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.038 [2024-07-24 23:16:47.740991] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.038 [2024-07-24 23:16:47.741446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.038 [2024-07-24 23:16:47.741464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.038 [2024-07-24 23:16:47.748455] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.038 [2024-07-24 23:16:47.748680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.038 [2024-07-24 23:16:47.748696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.038 [2024-07-24 23:16:47.755237] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 
00:28:30.039 [2024-07-24 23:16:47.755571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.039 [2024-07-24 23:16:47.755588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.039 [2024-07-24 23:16:47.761424] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.039 [2024-07-24 23:16:47.761639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.039 [2024-07-24 23:16:47.761655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.039 [2024-07-24 23:16:47.767733] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.039 [2024-07-24 23:16:47.768058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.039 [2024-07-24 23:16:47.768079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.039 [2024-07-24 23:16:47.776263] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.039 [2024-07-24 23:16:47.776489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.039 [2024-07-24 23:16:47.776506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.039 [2024-07-24 23:16:47.785966] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.039 [2024-07-24 23:16:47.786294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.039 [2024-07-24 23:16:47.786312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.039 [2024-07-24 23:16:47.795832] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.039 [2024-07-24 23:16:47.795969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.039 [2024-07-24 23:16:47.795985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.039 [2024-07-24 23:16:47.806644] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.039 [2024-07-24 23:16:47.806957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.039 [2024-07-24 23:16:47.806975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.039 [2024-07-24 23:16:47.816103] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.039 [2024-07-24 23:16:47.816321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.039 [2024-07-24 23:16:47.816337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.300 [2024-07-24 
23:16:47.825803] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.300 [2024-07-24 23:16:47.826029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-07-24 23:16:47.826046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.300 [2024-07-24 23:16:47.835301] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.300 [2024-07-24 23:16:47.835471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-07-24 23:16:47.835486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.300 [2024-07-24 23:16:47.845556] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.300 [2024-07-24 23:16:47.845905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-07-24 23:16:47.845922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.300 [2024-07-24 23:16:47.856678] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.301 [2024-07-24 23:16:47.857008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.301 [2024-07-24 23:16:47.857025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.301 [2024-07-24 23:16:47.865555] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.301 [2024-07-24 23:16:47.865787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.301 [2024-07-24 23:16:47.865803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.301 [2024-07-24 23:16:47.876706] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.301 [2024-07-24 23:16:47.877046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.301 [2024-07-24 23:16:47.877063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.301 [2024-07-24 23:16:47.886762] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.301 [2024-07-24 23:16:47.886989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.301 [2024-07-24 23:16:47.887006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.301 [2024-07-24 23:16:47.897149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.301 [2024-07-24 23:16:47.897470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.301 [2024-07-24 23:16:47.897488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.301 [2024-07-24 23:16:47.907355] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.301 [2024-07-24 23:16:47.907670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.301 [2024-07-24 23:16:47.907688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.301 [2024-07-24 23:16:47.916780] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.301 [2024-07-24 23:16:47.917155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.301 [2024-07-24 23:16:47.917172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.301 [2024-07-24 23:16:47.927482] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.301 [2024-07-24 23:16:47.927806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.301 [2024-07-24 23:16:47.927824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.301 [2024-07-24 23:16:47.935879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.301 [2024-07-24 23:16:47.936189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.301 [2024-07-24 23:16:47.936205] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.301 [2024-07-24 23:16:47.946300] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.301 [2024-07-24 23:16:47.946601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.301 [2024-07-24 23:16:47.946618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.301 [2024-07-24 23:16:47.955032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.301 [2024-07-24 23:16:47.955346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.301 [2024-07-24 23:16:47.955364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.301 [2024-07-24 23:16:47.962074] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.301 [2024-07-24 23:16:47.962289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.301 [2024-07-24 23:16:47.962306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.301 [2024-07-24 23:16:47.968911] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.301 [2024-07-24 23:16:47.969222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:30.301 [2024-07-24 23:16:47.969239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.301 [2024-07-24 23:16:47.977362] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.301 [2024-07-24 23:16:47.977671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.301 [2024-07-24 23:16:47.977689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.301 [2024-07-24 23:16:47.986086] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.301 [2024-07-24 23:16:47.986409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.301 [2024-07-24 23:16:47.986426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.301 [2024-07-24 23:16:47.993835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.301 [2024-07-24 23:16:47.994156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.301 [2024-07-24 23:16:47.994173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.301 [2024-07-24 23:16:48.002356] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.301 [2024-07-24 23:16:48.002582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.301 [2024-07-24 23:16:48.002597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.301 [2024-07-24 23:16:48.012035] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.301 [2024-07-24 23:16:48.012345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.301 [2024-07-24 23:16:48.012365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.301 [2024-07-24 23:16:48.018737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.301 [2024-07-24 23:16:48.019097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.301 [2024-07-24 23:16:48.019114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.301 [2024-07-24 23:16:48.026874] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.301 [2024-07-24 23:16:48.027216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.301 [2024-07-24 23:16:48.027233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.301 [2024-07-24 23:16:48.034904] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.301 [2024-07-24 23:16:48.035129] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.301 [2024-07-24 23:16:48.035145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.301 [2024-07-24 23:16:48.042320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.301 [2024-07-24 23:16:48.042537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.301 [2024-07-24 23:16:48.042553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.301 [2024-07-24 23:16:48.050469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.301 [2024-07-24 23:16:48.050791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.301 [2024-07-24 23:16:48.050808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.301 [2024-07-24 23:16:48.058651] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.301 [2024-07-24 23:16:48.058736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.301 [2024-07-24 23:16:48.058755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.301 [2024-07-24 23:16:48.065549] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 
00:28:30.301 [2024-07-24 23:16:48.065769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.301 [2024-07-24 23:16:48.065785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.301 [2024-07-24 23:16:48.073917] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.301 [2024-07-24 23:16:48.074237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.301 [2024-07-24 23:16:48.074253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.301 [2024-07-24 23:16:48.080905] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.301 [2024-07-24 23:16:48.081235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.301 [2024-07-24 23:16:48.081252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.563 [2024-07-24 23:16:48.087847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.563 [2024-07-24 23:16:48.088161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.563 [2024-07-24 23:16:48.088179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.563 [2024-07-24 23:16:48.094555] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.563 [2024-07-24 23:16:48.094787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.563 [2024-07-24 23:16:48.094802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.563 [2024-07-24 23:16:48.100072] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.563 [2024-07-24 23:16:48.100397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.563 [2024-07-24 23:16:48.100413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.563 [2024-07-24 23:16:48.107735] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.563 [2024-07-24 23:16:48.108102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.563 [2024-07-24 23:16:48.108119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.563 [2024-07-24 23:16:48.114033] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.563 [2024-07-24 23:16:48.114365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.563 [2024-07-24 23:16:48.114381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.563 [2024-07-24 
23:16:48.124395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.563 [2024-07-24 23:16:48.124726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.563 [2024-07-24 23:16:48.124742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.563 [2024-07-24 23:16:48.131862] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.563 [2024-07-24 23:16:48.132218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.563 [2024-07-24 23:16:48.132234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.563 [2024-07-24 23:16:48.138985] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.563 [2024-07-24 23:16:48.139329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.563 [2024-07-24 23:16:48.139346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.563 [2024-07-24 23:16:48.146878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.563 [2024-07-24 23:16:48.147093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.563 [2024-07-24 23:16:48.147109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.563 [2024-07-24 23:16:48.153181] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.563 [2024-07-24 23:16:48.153510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.563 [2024-07-24 23:16:48.153526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.563 [2024-07-24 23:16:48.161896] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.563 [2024-07-24 23:16:48.162246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.563 [2024-07-24 23:16:48.162263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.563 [2024-07-24 23:16:48.171425] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.563 [2024-07-24 23:16:48.171783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.563 [2024-07-24 23:16:48.171800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.563 [2024-07-24 23:16:48.178722] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.563 [2024-07-24 23:16:48.179182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.563 [2024-07-24 23:16:48.179199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.564 [2024-07-24 23:16:48.187973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.564 [2024-07-24 23:16:48.188290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.564 [2024-07-24 23:16:48.188307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.564 [2024-07-24 23:16:48.194857] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.564 [2024-07-24 23:16:48.195158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.564 [2024-07-24 23:16:48.195176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.564 [2024-07-24 23:16:48.204686] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.564 [2024-07-24 23:16:48.204904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.564 [2024-07-24 23:16:48.204920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.564 [2024-07-24 23:16:48.210943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.564 [2024-07-24 23:16:48.211159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.564 [2024-07-24 23:16:48.211178] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.564 [2024-07-24 23:16:48.217745] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.564 [2024-07-24 23:16:48.218098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.564 [2024-07-24 23:16:48.218115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.564 [2024-07-24 23:16:48.225392] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.564 [2024-07-24 23:16:48.225617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.564 [2024-07-24 23:16:48.225633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.564 [2024-07-24 23:16:48.233031] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.564 [2024-07-24 23:16:48.233246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.564 [2024-07-24 23:16:48.233261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.564 [2024-07-24 23:16:48.241112] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.564 [2024-07-24 23:16:48.241537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:30.564 [2024-07-24 23:16:48.241555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.564 [2024-07-24 23:16:48.252127] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.564 [2024-07-24 23:16:48.252487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.564 [2024-07-24 23:16:48.252503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.564 [2024-07-24 23:16:48.261736] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.564 [2024-07-24 23:16:48.262048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.564 [2024-07-24 23:16:48.262065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.564 [2024-07-24 23:16:48.270969] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.564 [2024-07-24 23:16:48.271294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.564 [2024-07-24 23:16:48.271311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.564 [2024-07-24 23:16:48.279185] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.564 [2024-07-24 23:16:48.279505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.564 [2024-07-24 23:16:48.279522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.564 [2024-07-24 23:16:48.287010] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.564 [2024-07-24 23:16:48.287234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.564 [2024-07-24 23:16:48.287250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.564 [2024-07-24 23:16:48.292936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.564 [2024-07-24 23:16:48.293271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.564 [2024-07-24 23:16:48.293288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.564 [2024-07-24 23:16:48.301786] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.564 [2024-07-24 23:16:48.302113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.564 [2024-07-24 23:16:48.302130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.564 [2024-07-24 23:16:48.310762] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.564 [2024-07-24 23:16:48.311170] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.564 [2024-07-24 23:16:48.311187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.564 [2024-07-24 23:16:48.320436] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.564 [2024-07-24 23:16:48.320760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.564 [2024-07-24 23:16:48.320777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.564 [2024-07-24 23:16:48.330606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.564 [2024-07-24 23:16:48.330955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.564 [2024-07-24 23:16:48.330972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.564 [2024-07-24 23:16:48.340063] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.564 [2024-07-24 23:16:48.340413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.564 [2024-07-24 23:16:48.340430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.826 [2024-07-24 23:16:48.349989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 
00:28:30.826 [2024-07-24 23:16:48.350100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.826 [2024-07-24 23:16:48.350115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.826 [2024-07-24 23:16:48.360459] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.826 [2024-07-24 23:16:48.360767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.826 [2024-07-24 23:16:48.360787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.826 [2024-07-24 23:16:48.371365] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.826 [2024-07-24 23:16:48.371686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.826 [2024-07-24 23:16:48.371703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.826 [2024-07-24 23:16:48.382746] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.826 [2024-07-24 23:16:48.382978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.826 [2024-07-24 23:16:48.382994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.826 [2024-07-24 23:16:48.390889] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.826 [2024-07-24 23:16:48.390976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.826 [2024-07-24 23:16:48.390991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.826 [2024-07-24 23:16:48.398797] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.826 [2024-07-24 23:16:48.399022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.826 [2024-07-24 23:16:48.399038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.826 [2024-07-24 23:16:48.409143] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.826 [2024-07-24 23:16:48.409465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.826 [2024-07-24 23:16:48.409482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.826 [2024-07-24 23:16:48.419396] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.826 [2024-07-24 23:16:48.419624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.826 [2024-07-24 23:16:48.419640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.826 [2024-07-24 
23:16:48.431053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.826 [2024-07-24 23:16:48.431395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.826 [2024-07-24 23:16:48.431412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.826 [2024-07-24 23:16:48.440082] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.826 [2024-07-24 23:16:48.440423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.826 [2024-07-24 23:16:48.440440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.826 [2024-07-24 23:16:48.450356] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.826 [2024-07-24 23:16:48.450775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.826 [2024-07-24 23:16:48.450792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.826 [2024-07-24 23:16:48.461449] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.826 [2024-07-24 23:16:48.461678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.826 [2024-07-24 23:16:48.461694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.826 [2024-07-24 23:16:48.469404] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.826 [2024-07-24 23:16:48.469724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.826 [2024-07-24 23:16:48.469741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.826 [2024-07-24 23:16:48.476492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.826 [2024-07-24 23:16:48.476706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.826 [2024-07-24 23:16:48.476723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.826 [2024-07-24 23:16:48.482764] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.826 [2024-07-24 23:16:48.483135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.826 [2024-07-24 23:16:48.483152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.826 [2024-07-24 23:16:48.490802] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.826 [2024-07-24 23:16:48.491134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.826 [2024-07-24 23:16:48.491151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.826 [2024-07-24 23:16:48.498066] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.826 [2024-07-24 23:16:48.498391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.826 [2024-07-24 23:16:48.498408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.826 [2024-07-24 23:16:48.506036] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.826 [2024-07-24 23:16:48.506252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.826 [2024-07-24 23:16:48.506268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.826 [2024-07-24 23:16:48.515790] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.826 [2024-07-24 23:16:48.516136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.826 [2024-07-24 23:16:48.516153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.826 [2024-07-24 23:16:48.522275] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.826 [2024-07-24 23:16:48.522490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.826 [2024-07-24 23:16:48.522506] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.826 [2024-07-24 23:16:48.529801] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.826 [2024-07-24 23:16:48.530165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.826 [2024-07-24 23:16:48.530182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.826 [2024-07-24 23:16:48.537123] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.826 [2024-07-24 23:16:48.537338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.826 [2024-07-24 23:16:48.537354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.826 [2024-07-24 23:16:48.544077] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.826 [2024-07-24 23:16:48.544292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.827 [2024-07-24 23:16:48.544308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.827 [2024-07-24 23:16:48.553021] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.827 [2024-07-24 23:16:48.553330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:30.827 [2024-07-24 23:16:48.553346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.827 [2024-07-24 23:16:48.563315] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.827 [2024-07-24 23:16:48.563530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.827 [2024-07-24 23:16:48.563546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.827 [2024-07-24 23:16:48.572031] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.827 [2024-07-24 23:16:48.572394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.827 [2024-07-24 23:16:48.572411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.827 [2024-07-24 23:16:48.579645] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.827 [2024-07-24 23:16:48.579874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.827 [2024-07-24 23:16:48.579890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.827 [2024-07-24 23:16:48.589303] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.827 [2024-07-24 23:16:48.589400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.827 [2024-07-24 23:16:48.589418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.827 [2024-07-24 23:16:48.596275] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.827 [2024-07-24 23:16:48.596488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.827 [2024-07-24 23:16:48.596505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.827 [2024-07-24 23:16:48.604653] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:30.827 [2024-07-24 23:16:48.604880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.827 [2024-07-24 23:16:48.604896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.089 [2024-07-24 23:16:48.613240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:31.089 [2024-07-24 23:16:48.613574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.089 [2024-07-24 23:16:48.613591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.089 [2024-07-24 23:16:48.623779] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:31.089 [2024-07-24 23:16:48.624089] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.089 [2024-07-24 23:16:48.624105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.089 [2024-07-24 23:16:48.633008] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:31.089 [2024-07-24 23:16:48.633346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.089 [2024-07-24 23:16:48.633363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.089 [2024-07-24 23:16:48.641945] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:31.089 [2024-07-24 23:16:48.642268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.089 [2024-07-24 23:16:48.642284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.089 [2024-07-24 23:16:48.652189] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:31.089 [2024-07-24 23:16:48.652414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.089 [2024-07-24 23:16:48.652429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.089 [2024-07-24 23:16:48.659127] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 
00:28:31.089 [2024-07-24 23:16:48.659457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.089 [2024-07-24 23:16:48.659474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.089 [2024-07-24 23:16:48.668556] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.089 [2024-07-24 23:16:48.668898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.089 [2024-07-24 23:16:48.668914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.089 [2024-07-24 23:16:48.677865] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.089 [2024-07-24 23:16:48.678290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.089 [2024-07-24 23:16:48.678307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.089 [2024-07-24 23:16:48.686183] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.089 [2024-07-24 23:16:48.686409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.089 [2024-07-24 23:16:48.686425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.089 [2024-07-24 23:16:48.694453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.089 [2024-07-24 23:16:48.694775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.089 [2024-07-24 23:16:48.694792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.089 [2024-07-24 23:16:48.700805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.089 [2024-07-24 23:16:48.701022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.089 [2024-07-24 23:16:48.701038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.089 [2024-07-24 23:16:48.707079] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.089 [2024-07-24 23:16:48.707389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.089 [2024-07-24 23:16:48.707405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.089 [2024-07-24 23:16:48.716676] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.089 [2024-07-24 23:16:48.717005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.089 [2024-07-24 23:16:48.717022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.089 [2024-07-24 23:16:48.722853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.089 [2024-07-24 23:16:48.723069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.089 [2024-07-24 23:16:48.723084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.089 [2024-07-24 23:16:48.730310] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.089 [2024-07-24 23:16:48.730626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.089 [2024-07-24 23:16:48.730643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.089 [2024-07-24 23:16:48.739348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.089 [2024-07-24 23:16:48.739461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.089 [2024-07-24 23:16:48.739475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.089 [2024-07-24 23:16:48.748216] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.089 [2024-07-24 23:16:48.748532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.089 [2024-07-24 23:16:48.748548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.090 [2024-07-24 23:16:48.756237] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.090 [2024-07-24 23:16:48.756577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.090 [2024-07-24 23:16:48.756594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.090 [2024-07-24 23:16:48.765218] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.090 [2024-07-24 23:16:48.765558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.090 [2024-07-24 23:16:48.765574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.090 [2024-07-24 23:16:48.773950] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.090 [2024-07-24 23:16:48.774286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.090 [2024-07-24 23:16:48.774303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.090 [2024-07-24 23:16:48.780931] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.090 [2024-07-24 23:16:48.781248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.090 [2024-07-24 23:16:48.781265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.090 [2024-07-24 23:16:48.787824] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.090 [2024-07-24 23:16:48.788168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.090 [2024-07-24 23:16:48.788185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.090 [2024-07-24 23:16:48.794759] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.090 [2024-07-24 23:16:48.794986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.090 [2024-07-24 23:16:48.795001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.090 [2024-07-24 23:16:48.800470] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.090 [2024-07-24 23:16:48.800537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.090 [2024-07-24 23:16:48.800554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.090 [2024-07-24 23:16:48.810143] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.090 [2024-07-24 23:16:48.810367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.090 [2024-07-24 23:16:48.810383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.090 [2024-07-24 23:16:48.815887] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.090 [2024-07-24 23:16:48.816226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.090 [2024-07-24 23:16:48.816242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.090 [2024-07-24 23:16:48.823508] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.090 [2024-07-24 23:16:48.823835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.090 [2024-07-24 23:16:48.823851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.090 [2024-07-24 23:16:48.831021] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.090 [2024-07-24 23:16:48.831321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.090 [2024-07-24 23:16:48.831338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.090 [2024-07-24 23:16:48.836976] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.090 [2024-07-24 23:16:48.837191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.090 [2024-07-24 23:16:48.837207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.090 [2024-07-24 23:16:48.843869] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.090 [2024-07-24 23:16:48.844215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.090 [2024-07-24 23:16:48.844232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.090 [2024-07-24 23:16:48.853088] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.090 [2024-07-24 23:16:48.853405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.090 [2024-07-24 23:16:48.853422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.090 [2024-07-24 23:16:48.863554] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.090 [2024-07-24 23:16:48.863774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.090 [2024-07-24 23:16:48.863790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.090 [2024-07-24 23:16:48.870779] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.090 [2024-07-24 23:16:48.871009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.090 [2024-07-24 23:16:48.871024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.351 [2024-07-24 23:16:48.876953] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.351 [2024-07-24 23:16:48.877285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.351 [2024-07-24 23:16:48.877302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.351 [2024-07-24 23:16:48.883989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.351 [2024-07-24 23:16:48.884332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.351 [2024-07-24 23:16:48.884348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.351 [2024-07-24 23:16:48.889534] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.352 [2024-07-24 23:16:48.889748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.352 [2024-07-24 23:16:48.889770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.352 [2024-07-24 23:16:48.895998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.352 [2024-07-24 23:16:48.896316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.352 [2024-07-24 23:16:48.896333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.352 [2024-07-24 23:16:48.903020] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.352 [2024-07-24 23:16:48.903360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.352 [2024-07-24 23:16:48.903376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.352 [2024-07-24 23:16:48.909423] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.352 [2024-07-24 23:16:48.909550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.352 [2024-07-24 23:16:48.909565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.352 [2024-07-24 23:16:48.919838] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.352 [2024-07-24 23:16:48.919921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.352 [2024-07-24 23:16:48.919936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.352 [2024-07-24 23:16:48.928928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.352 [2024-07-24 23:16:48.929155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.352 [2024-07-24 23:16:48.929171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.352 [2024-07-24 23:16:48.938788] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.352 [2024-07-24 23:16:48.939013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.352 [2024-07-24 23:16:48.939029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.352 [2024-07-24 23:16:48.948990] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.352 [2024-07-24 23:16:48.949217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.352 [2024-07-24 23:16:48.949233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.352 [2024-07-24 23:16:48.959624] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.352 [2024-07-24 23:16:48.959853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.352 [2024-07-24 23:16:48.959869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.352 [2024-07-24 23:16:48.970175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.352 [2024-07-24 23:16:48.970502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.352 [2024-07-24 23:16:48.970519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.352 [2024-07-24 23:16:48.979770] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.352 [2024-07-24 23:16:48.979890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.352 [2024-07-24 23:16:48.979905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.352 [2024-07-24 23:16:48.990182] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.352 [2024-07-24 23:16:48.990517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.352 [2024-07-24 23:16:48.990534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.352 [2024-07-24 23:16:49.000464] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.352 [2024-07-24 23:16:49.000784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.352 [2024-07-24 23:16:49.000801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.352 [2024-07-24 23:16:49.011648] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.352 [2024-07-24 23:16:49.011983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.352 [2024-07-24 23:16:49.011999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.352 [2024-07-24 23:16:49.022483] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.352 [2024-07-24 23:16:49.022682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.352 [2024-07-24 23:16:49.022698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.352 [2024-07-24 23:16:49.033316] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.352 [2024-07-24 23:16:49.033653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.352 [2024-07-24 23:16:49.033670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.352 [2024-07-24 23:16:49.044578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.352 [2024-07-24 23:16:49.044755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.352 [2024-07-24 23:16:49.044769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.352 [2024-07-24 23:16:49.055758] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.352 [2024-07-24 23:16:49.055854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.352 [2024-07-24 23:16:49.055868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.352 [2024-07-24 23:16:49.066789] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.352 [2024-07-24 23:16:49.067113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.352 [2024-07-24 23:16:49.067129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.352 [2024-07-24 23:16:49.077529] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.352 [2024-07-24 23:16:49.077865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.352 [2024-07-24 23:16:49.077881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.352 [2024-07-24 23:16:49.088402] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.352 [2024-07-24 23:16:49.088632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.352 [2024-07-24 23:16:49.088647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.352 [2024-07-24 23:16:49.096882] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.352 [2024-07-24 23:16:49.097112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.352 [2024-07-24 23:16:49.097128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.352 [2024-07-24 23:16:49.106602] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.352 [2024-07-24 23:16:49.106709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.352 [2024-07-24 23:16:49.106724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.352 [2024-07-24 23:16:49.115870] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.352 [2024-07-24 23:16:49.115972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.352 [2024-07-24 23:16:49.115987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.352 [2024-07-24 23:16:49.126163] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.352 [2024-07-24 23:16:49.126388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.352 [2024-07-24 23:16:49.126404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.352 [2024-07-24 23:16:49.136149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.352 [2024-07-24 23:16:49.136451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.352 [2024-07-24 23:16:49.136468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.614 [2024-07-24 23:16:49.145902] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.614 [2024-07-24 23:16:49.146119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.614 [2024-07-24 23:16:49.146135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.614 [2024-07-24 23:16:49.155767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.615 [2024-07-24 23:16:49.156118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.615 [2024-07-24 23:16:49.156134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.615 [2024-07-24 23:16:49.164470] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.615 [2024-07-24 23:16:49.164606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.615 [2024-07-24 23:16:49.164622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.615 [2024-07-24 23:16:49.173004] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.615 [2024-07-24 23:16:49.173381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.615 [2024-07-24 23:16:49.173397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.615 [2024-07-24 23:16:49.181467] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.615 [2024-07-24 23:16:49.181787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.615 [2024-07-24 23:16:49.181803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.615 [2024-07-24 23:16:49.187926] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.615 [2024-07-24 23:16:49.188297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.615 [2024-07-24 23:16:49.188319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.615 [2024-07-24 23:16:49.194663] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.615 [2024-07-24 23:16:49.195023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.615 [2024-07-24 23:16:49.195039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.615 [2024-07-24 23:16:49.203156] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.615 [2024-07-24 23:16:49.203506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.615 [2024-07-24 23:16:49.203523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.615 [2024-07-24 23:16:49.209968] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.615 [2024-07-24 23:16:49.210174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.615 [2024-07-24 23:16:49.210191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.615 [2024-07-24 23:16:49.216608] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.615 [2024-07-24 23:16:49.216816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.615 [2024-07-24 23:16:49.216832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.615 [2024-07-24 23:16:49.222533] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.615 [2024-07-24 23:16:49.222738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.615 [2024-07-24 23:16:49.222760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.615 [2024-07-24 23:16:49.230574] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.615 [2024-07-24 23:16:49.230819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.615 [2024-07-24 23:16:49.230841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.615 [2024-07-24 23:16:49.237003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.615 [2024-07-24 23:16:49.237385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.615 [2024-07-24 23:16:49.237401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.615 [2024-07-24 23:16:49.244652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.615 [2024-07-24 23:16:49.245007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.615 [2024-07-24 23:16:49.245025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.615 [2024-07-24 23:16:49.253160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.615 [2024-07-24 23:16:49.253421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.615 [2024-07-24 23:16:49.253437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.615 [2024-07-24 23:16:49.260096] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.615 [2024-07-24 23:16:49.260494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.615 [2024-07-24 23:16:49.260511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.615 [2024-07-24 23:16:49.267673] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.615 [2024-07-24 23:16:49.268036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.615 [2024-07-24 23:16:49.268053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.615 [2024-07-24 23:16:49.277205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.615 [2024-07-24 23:16:49.277411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.615 [2024-07-24 23:16:49.277427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.615 [2024-07-24 23:16:49.285673] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.615 [2024-07-24 23:16:49.286052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.615 [2024-07-24 23:16:49.286069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.615 [2024-07-24 23:16:49.293107] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.615 [2024-07-24 23:16:49.293314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.615 [2024-07-24 23:16:49.293330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.615 [2024-07-24 23:16:49.299723] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.615 [2024-07-24 23:16:49.300048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.615 [2024-07-24 23:16:49.300065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.615 [2024-07-24 23:16:49.307192] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.615 [2024-07-24 23:16:49.307413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.615 [2024-07-24 23:16:49.307429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.615 [2024-07-24 23:16:49.316677] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.615 [2024-07-24 23:16:49.316985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.615 [2024-07-24 23:16:49.317002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.615 [2024-07-24 23:16:49.327293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.615 [2024-07-24 23:16:49.327812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.615 [2024-07-24 23:16:49.327830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.615 [2024-07-24 23:16:49.337731] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.615 [2024-07-24 23:16:49.338014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.615 [2024-07-24 23:16:49.338030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.615 [2024-07-24 23:16:49.347507] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90
00:28:31.615 [2024-07-24 23:16:49.347756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.615 [2024-07-24 23:16:49.347772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.615 [2024-07-24 23:16:49.358175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:31.615 [2024-07-24 23:16:49.358579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.615 [2024-07-24 23:16:49.358596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.615 [2024-07-24 23:16:49.367733] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:31.616 [2024-07-24 23:16:49.368065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.616 [2024-07-24 23:16:49.368082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.616 [2024-07-24 23:16:49.379229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:31.616 [2024-07-24 23:16:49.379513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.616 [2024-07-24 23:16:49.379530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.616 [2024-07-24 23:16:49.385725] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:31.616 [2024-07-24 23:16:49.386085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.616 [2024-07-24 23:16:49.386102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.616 [2024-07-24 23:16:49.391939] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:31.616 [2024-07-24 23:16:49.392316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.616 [2024-07-24 23:16:49.392332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.616 [2024-07-24 23:16:49.397923] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:31.616 [2024-07-24 23:16:49.398278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.616 [2024-07-24 23:16:49.398298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.877 [2024-07-24 23:16:49.405546] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:31.877 [2024-07-24 23:16:49.405756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.877 [2024-07-24 23:16:49.405772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.877 [2024-07-24 23:16:49.411784] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:31.877 [2024-07-24 23:16:49.411989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.877 [2024-07-24 23:16:49.412005] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.877 [2024-07-24 23:16:49.419083] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:31.877 [2024-07-24 23:16:49.419465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.877 [2024-07-24 23:16:49.419482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.877 [2024-07-24 23:16:49.427830] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:31.877 [2024-07-24 23:16:49.428238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.877 [2024-07-24 23:16:49.428255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.877 [2024-07-24 23:16:49.435791] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:31.877 [2024-07-24 23:16:49.436193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.877 [2024-07-24 23:16:49.436210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.877 [2024-07-24 23:16:49.443154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:31.877 [2024-07-24 23:16:49.443468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:31.877 [2024-07-24 23:16:49.443485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.877 [2024-07-24 23:16:49.450634] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:31.877 [2024-07-24 23:16:49.451044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.877 [2024-07-24 23:16:49.451060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.877 [2024-07-24 23:16:49.456984] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:31.877 [2024-07-24 23:16:49.457210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.877 [2024-07-24 23:16:49.457225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.877 [2024-07-24 23:16:49.463071] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:31.877 [2024-07-24 23:16:49.463287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.877 [2024-07-24 23:16:49.463304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.877 [2024-07-24 23:16:49.471517] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:31.877 [2024-07-24 23:16:49.471750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.877 [2024-07-24 23:16:49.471770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.877 [2024-07-24 23:16:49.477332] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:31.877 [2024-07-24 23:16:49.477555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.877 [2024-07-24 23:16:49.477571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.877 [2024-07-24 23:16:49.484947] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:31.877 [2024-07-24 23:16:49.485330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.877 [2024-07-24 23:16:49.485346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.877 [2024-07-24 23:16:49.491631] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:31.877 [2024-07-24 23:16:49.491837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.877 [2024-07-24 23:16:49.491853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.877 [2024-07-24 23:16:49.497623] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:31.877 [2024-07-24 23:16:49.497830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.877 [2024-07-24 23:16:49.497846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.877 [2024-07-24 23:16:49.503631] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:31.877 [2024-07-24 23:16:49.503848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.877 [2024-07-24 23:16:49.503864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.877 [2024-07-24 23:16:49.512388] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:31.877 [2024-07-24 23:16:49.512683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.877 [2024-07-24 23:16:49.512699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.877 [2024-07-24 23:16:49.521551] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 00:28:31.877 [2024-07-24 23:16:49.521819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.877 [2024-07-24 23:16:49.521836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.877 [2024-07-24 23:16:49.529953] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1927c90) with pdu=0x2000190fef90 
00:28:31.877 [2024-07-24 23:16:49.530290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.877 [2024-07-24 23:16:49.530306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.877 00:28:31.877 Latency(us) 00:28:31.877 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:31.877 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:31.877 nvme0n1 : 2.00 3634.88 454.36 0.00 0.00 4395.48 2389.33 12069.55 00:28:31.877 =================================================================================================================== 00:28:31.877 Total : 3634.88 454.36 0.00 0.00 4395.48 2389.33 12069.55 00:28:31.877 0 00:28:31.877 23:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:31.877 23:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:31.877 23:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:31.877 | .driver_specific 00:28:31.877 | .nvme_error 00:28:31.877 | .status_code 00:28:31.877 | .command_transient_transport_error' 00:28:31.877 23:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:32.138 23:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 234 > 0 )) 00:28:32.138 23:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1033297 00:28:32.138 23:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1033297 ']' 00:28:32.138 23:16:49 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1033297 00:28:32.138 23:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:32.138 23:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:32.138 23:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1033297 00:28:32.138 23:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:32.138 23:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:32.138 23:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1033297' 00:28:32.139 killing process with pid 1033297 00:28:32.139 23:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1033297 00:28:32.139 Received shutdown signal, test time was about 2.000000 seconds 00:28:32.139 00:28:32.139 Latency(us) 00:28:32.139 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:32.139 =================================================================================================================== 00:28:32.139 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:32.139 23:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1033297 00:28:32.139 23:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1030896 00:28:32.139 23:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1030896 ']' 00:28:32.139 23:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1030896 00:28:32.139 23:16:49 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:32.139 23:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:32.139 23:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1030896 00:28:32.399 23:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:32.399 23:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:32.399 23:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1030896' 00:28:32.399 killing process with pid 1030896 00:28:32.399 23:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1030896 00:28:32.399 23:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1030896 00:28:32.399 00:28:32.399 real 0m16.270s 00:28:32.399 user 0m31.784s 00:28:32.399 sys 0m3.289s 00:28:32.399 23:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:32.399 23:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:32.399 ************************************ 00:28:32.399 END TEST nvmf_digest_error 00:28:32.399 ************************************ 00:28:32.399 23:16:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:32.399 23:16:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:32.399 23:16:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:32.399 23:16:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:28:32.399 23:16:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 
-- # '[' tcp == tcp ']' 00:28:32.399 23:16:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:28:32.399 23:16:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:32.399 23:16:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:32.399 rmmod nvme_tcp 00:28:32.399 rmmod nvme_fabrics 00:28:32.399 rmmod nvme_keyring 00:28:32.399 23:16:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:32.660 23:16:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:28:32.660 23:16:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:28:32.660 23:16:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1030896 ']' 00:28:32.660 23:16:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1030896 00:28:32.660 23:16:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 1030896 ']' 00:28:32.660 23:16:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 1030896 00:28:32.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1030896) - No such process 00:28:32.660 23:16:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 1030896 is not found' 00:28:32.660 Process with pid 1030896 is not found 00:28:32.660 23:16:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:32.660 23:16:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:32.660 23:16:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:32.660 23:16:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:32.660 23:16:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:32.660 23:16:50 nvmf_tcp.nvmf_host.nvmf_digest 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:32.660 23:16:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:32.660 23:16:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.574 23:16:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:34.574 00:28:34.574 real 0m42.572s 00:28:34.574 user 1m5.406s 00:28:34.574 sys 0m12.514s 00:28:34.574 23:16:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:34.574 23:16:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:34.574 ************************************ 00:28:34.574 END TEST nvmf_digest 00:28:34.574 ************************************ 00:28:34.574 23:16:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:28:34.574 23:16:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:34.574 23:16:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:28:34.574 23:16:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:34.574 23:16:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:34.574 23:16:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:34.574 23:16:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.574 ************************************ 00:28:34.574 START TEST nvmf_bdevperf 00:28:34.574 ************************************ 00:28:34.574 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:34.835 * Looking for test storage... 
00:28:34.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:34.835 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:34.835 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:34.835 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:34.835 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:34.835 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:34.835 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:34.835 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:34.835 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:34.835 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:34.835 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:34.835 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:34.835 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:34.835 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:34.835 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:34.835 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:34.835 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:34.835 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:34.835 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:34.835 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:34.835 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:34.835 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:34.835 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:34.835 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.835 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.835 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.835 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:34.836 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.836 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:28:34.836 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:34.836 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:34.836 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:34.836 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:34.836 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:34.836 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:34.836 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:34.836 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:34.836 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:34.836 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:34.836 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:34.836 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:34.836 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:28:34.836 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:34.836 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:34.836 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:34.836 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.836 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:34.836 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.836 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:34.836 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:34.836 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:34.836 23:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:42.982 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:42.982 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:42.982 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:42.982 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:42.982 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:42.982 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:42.982 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:42.982 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:28:42.982 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:28:42.982 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:28:42.982 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:28:42.982 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:28:42.982 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:42.983 23:17:00 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:42.983 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:42.983 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:42.983 Found net devices under 0000:31:00.0: cvl_0_0 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:42.983 Found net devices under 0000:31:00.1: cvl_0_1 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:42.983 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:43.244 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:43.244 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:43.244 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.745 ms 00:28:43.244 00:28:43.244 --- 10.0.0.2 ping statistics --- 00:28:43.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.244 rtt min/avg/max/mdev = 0.745/0.745/0.745/0.000 ms 00:28:43.244 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:43.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:43.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:28:43.244 00:28:43.244 --- 10.0.0.1 ping statistics --- 00:28:43.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.244 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:28:43.244 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:43.244 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:28:43.244 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:43.244 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:43.244 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:43.244 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:43.244 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:43.244 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:43.244 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:43.244 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:43.244 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:43.244 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:43.244 
23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:43.244 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:43.244 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1038662 00:28:43.244 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1038662 00:28:43.244 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:43.244 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1038662 ']' 00:28:43.244 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:43.244 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:43.244 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:43.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:43.244 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:43.244 23:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:43.244 [2024-07-24 23:17:00.886368] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:28:43.244 [2024-07-24 23:17:00.886433] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:43.244 EAL: No free 2048 kB hugepages reported on node 1 00:28:43.244 [2024-07-24 23:17:00.983965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:43.504 [2024-07-24 23:17:01.068935] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:43.504 [2024-07-24 23:17:01.068994] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:43.504 [2024-07-24 23:17:01.069003] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:43.504 [2024-07-24 23:17:01.069010] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:43.504 [2024-07-24 23:17:01.069016] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:43.504 [2024-07-24 23:17:01.069153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:43.504 [2024-07-24 23:17:01.069316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:43.504 [2024-07-24 23:17:01.069316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:44.074 [2024-07-24 23:17:01.691670] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:44.074 Malloc0 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:44.074 [2024-07-24 23:17:01.756790] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:44.074 
23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:44.074 { 00:28:44.074 "params": { 00:28:44.074 "name": "Nvme$subsystem", 00:28:44.074 "trtype": "$TEST_TRANSPORT", 00:28:44.074 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.074 "adrfam": "ipv4", 00:28:44.074 "trsvcid": "$NVMF_PORT", 00:28:44.074 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.074 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.074 "hdgst": ${hdgst:-false}, 00:28:44.074 "ddgst": ${ddgst:-false} 00:28:44.074 }, 00:28:44.074 "method": "bdev_nvme_attach_controller" 00:28:44.074 } 00:28:44.074 EOF 00:28:44.074 )") 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:44.074 23:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:44.074 "params": { 00:28:44.074 "name": "Nvme1", 00:28:44.074 "trtype": "tcp", 00:28:44.074 "traddr": "10.0.0.2", 00:28:44.074 "adrfam": "ipv4", 00:28:44.074 "trsvcid": "4420", 00:28:44.074 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:44.074 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:44.074 "hdgst": false, 00:28:44.074 "ddgst": false 00:28:44.074 }, 00:28:44.074 "method": "bdev_nvme_attach_controller" 00:28:44.074 }' 00:28:44.074 [2024-07-24 23:17:01.818963] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:28:44.074 [2024-07-24 23:17:01.819067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1038872 ] 00:28:44.074 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.334 [2024-07-24 23:17:01.887455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.334 [2024-07-24 23:17:01.952946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.334 Running I/O for 1 seconds... 00:28:45.717 00:28:45.717 Latency(us) 00:28:45.717 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.717 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:45.717 Verification LBA range: start 0x0 length 0x4000 00:28:45.717 Nvme1n1 : 1.01 9003.25 35.17 0.00 0.00 14155.01 3112.96 16384.00 00:28:45.717 =================================================================================================================== 00:28:45.717 Total : 9003.25 35.17 0.00 0.00 14155.01 3112.96 16384.00 00:28:45.717 23:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1039098 00:28:45.717 23:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:45.717 23:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:45.717 23:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:45.717 23:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:45.717 23:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:45.717 23:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:45.717 23:17:03 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:45.717 { 00:28:45.717 "params": { 00:28:45.717 "name": "Nvme$subsystem", 00:28:45.717 "trtype": "$TEST_TRANSPORT", 00:28:45.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.717 "adrfam": "ipv4", 00:28:45.717 "trsvcid": "$NVMF_PORT", 00:28:45.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.717 "hdgst": ${hdgst:-false}, 00:28:45.717 "ddgst": ${ddgst:-false} 00:28:45.717 }, 00:28:45.717 "method": "bdev_nvme_attach_controller" 00:28:45.717 } 00:28:45.717 EOF 00:28:45.717 )") 00:28:45.717 23:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:45.717 23:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:45.717 23:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:45.717 23:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:45.717 "params": { 00:28:45.717 "name": "Nvme1", 00:28:45.717 "trtype": "tcp", 00:28:45.717 "traddr": "10.0.0.2", 00:28:45.717 "adrfam": "ipv4", 00:28:45.717 "trsvcid": "4420", 00:28:45.717 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:45.718 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:45.718 "hdgst": false, 00:28:45.718 "ddgst": false 00:28:45.718 }, 00:28:45.718 "method": "bdev_nvme_attach_controller" 00:28:45.718 }' 00:28:45.718 [2024-07-24 23:17:03.293527] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:28:45.718 [2024-07-24 23:17:03.293582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1039098 ] 00:28:45.718 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.718 [2024-07-24 23:17:03.359154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.718 [2024-07-24 23:17:03.422067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.977 Running I/O for 15 seconds... 00:28:48.523 23:17:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1038662 00:28:48.523 23:17:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:48.523 [2024-07-24 23:17:06.261390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:115664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.523 [2024-07-24 23:17:06.261432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.523 [2024-07-24 23:17:06.261452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:115672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.523 [2024-07-24 23:17:06.261462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.523 [2024-07-24 23:17:06.261473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:115680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.523 [2024-07-24 23:17:06.261482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.523 [2024-07-24 23:17:06.261493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:115688 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.523 [2024-07-24 23:17:06.261500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.523 [2024-07-24 23:17:06.261510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:115696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.524 [2024-07-24 23:17:06.261517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.524 [2024-07-24 23:17:06.261526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:115704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.524 [2024-07-24 23:17:06.261534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.524 [2024-07-24 23:17:06.261544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.524 [2024-07-24 23:17:06.261559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.524 [2024-07-24 23:17:06.261569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:115720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.524 [2024-07-24 23:17:06.261576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.524 [2024-07-24 23:17:06.261589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:115728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.524 [2024-07-24 23:17:06.261597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.524 [2024-07-24 
23:17:06.261608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:115736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.524 [2024-07-24 23:17:06.261617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.524 [2024-07-24 23:17:06.261627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:115744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.524 [2024-07-24 23:17:06.261636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.524 [2024-07-24 23:17:06.261648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:115752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.524 [2024-07-24 23:17:06.261657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.524 [2024-07-24 23:17:06.261667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:115760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.524 [2024-07-24 23:17:06.261677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.524 [2024-07-24 23:17:06.261687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:115768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.524 [2024-07-24 23:17:06.261696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.524 [2024-07-24 23:17:06.261706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:115776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.524 [2024-07-24 23:17:06.261714] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:48.524 [2024-07-24 23:17:06.261723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:115784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:48.524 [2024-07-24 23:17:06.261731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:48.524 [2024-07-24 23:17:06.261741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:116304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:48.524 [2024-07-24 23:17:06.261747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ~110 further command/completion pairs elided (00:28:48.524-00:28:48.527, 23:17:06.261854-23:17:06.263644): every remaining queued READ (lba:115792-116288, SGL TRANSPORT DATA BLOCK) and WRITE (lba:116312-116680, SGL DATA BLOCK OFFSET, len:0x1000) on sqid:1 completes with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:28:48.527 [2024-07-24 23:17:06.263652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c8630 is same with the state(5) to be set
00:28:48.527 [2024-07-24 23:17:06.263661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:48.527 [2024-07-24 23:17:06.263666] nvme_qpair.c:
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:48.527 [2024-07-24 23:17:06.263673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116296 len:8 PRP1 0x0 PRP2 0x0 00:28:48.527 [2024-07-24 23:17:06.263680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.527 [2024-07-24 23:17:06.263718] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12c8630 was disconnected and freed. reset controller. 00:28:48.527 [2024-07-24 23:17:06.267264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.527 [2024-07-24 23:17:06.267312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:48.527 [2024-07-24 23:17:06.268195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.527 [2024-07-24 23:17:06.268232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:48.527 [2024-07-24 23:17:06.268242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:48.527 [2024-07-24 23:17:06.268483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:48.527 [2024-07-24 23:17:06.268710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.527 [2024-07-24 23:17:06.268719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.527 [2024-07-24 23:17:06.268727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:48.527 [2024-07-24 23:17:06.272286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.527 [2024-07-24 23:17:06.281497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.527 [2024-07-24 23:17:06.282204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.527 [2024-07-24 23:17:06.282223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:48.527 [2024-07-24 23:17:06.282231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:48.527 [2024-07-24 23:17:06.282452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:48.527 [2024-07-24 23:17:06.282671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.527 [2024-07-24 23:17:06.282679] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.527 [2024-07-24 23:17:06.282687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.527 [2024-07-24 23:17:06.286241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.527 [2024-07-24 23:17:06.295439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.527 [2024-07-24 23:17:06.296136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.527 [2024-07-24 23:17:06.296172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:48.527 [2024-07-24 23:17:06.296183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:48.527 [2024-07-24 23:17:06.296423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:48.527 [2024-07-24 23:17:06.296646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.527 [2024-07-24 23:17:06.296654] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.527 [2024-07-24 23:17:06.296662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.527 [2024-07-24 23:17:06.300232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.790 [2024-07-24 23:17:06.309448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.790 [2024-07-24 23:17:06.310161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.790 [2024-07-24 23:17:06.310198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:48.790 [2024-07-24 23:17:06.310210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:48.790 [2024-07-24 23:17:06.310462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:48.790 [2024-07-24 23:17:06.310686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.790 [2024-07-24 23:17:06.310694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.790 [2024-07-24 23:17:06.310702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.790 [2024-07-24 23:17:06.314269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.790 [2024-07-24 23:17:06.323286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.790 [2024-07-24 23:17:06.324017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.790 [2024-07-24 23:17:06.324054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:48.790 [2024-07-24 23:17:06.324065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:48.790 [2024-07-24 23:17:06.324304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:48.790 [2024-07-24 23:17:06.324526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.790 [2024-07-24 23:17:06.324535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.790 [2024-07-24 23:17:06.324543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.790 [2024-07-24 23:17:06.328145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.790 [2024-07-24 23:17:06.337149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.790 [2024-07-24 23:17:06.337776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.790 [2024-07-24 23:17:06.337812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:48.790 [2024-07-24 23:17:06.337825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:48.790 [2024-07-24 23:17:06.338067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:48.790 [2024-07-24 23:17:06.338290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.790 [2024-07-24 23:17:06.338298] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.790 [2024-07-24 23:17:06.338306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.790 [2024-07-24 23:17:06.341864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.790 [2024-07-24 23:17:06.351070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.790 [2024-07-24 23:17:06.351669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.790 [2024-07-24 23:17:06.351706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:48.790 [2024-07-24 23:17:06.351718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:48.790 [2024-07-24 23:17:06.351969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:48.790 [2024-07-24 23:17:06.352193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.790 [2024-07-24 23:17:06.352201] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.790 [2024-07-24 23:17:06.352208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.790 [2024-07-24 23:17:06.355762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.790 [2024-07-24 23:17:06.364966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.790 [2024-07-24 23:17:06.365663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.790 [2024-07-24 23:17:06.365700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:48.790 [2024-07-24 23:17:06.365714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:48.790 [2024-07-24 23:17:06.365963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:48.790 [2024-07-24 23:17:06.366186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.790 [2024-07-24 23:17:06.366195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.790 [2024-07-24 23:17:06.366202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.790 [2024-07-24 23:17:06.369754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.790 [2024-07-24 23:17:06.378959] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.790 [2024-07-24 23:17:06.379647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.790 [2024-07-24 23:17:06.379684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:48.790 [2024-07-24 23:17:06.379694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:48.790 [2024-07-24 23:17:06.379942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:48.790 [2024-07-24 23:17:06.380165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.790 [2024-07-24 23:17:06.380173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.790 [2024-07-24 23:17:06.380181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.790 [2024-07-24 23:17:06.383730] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.790 [2024-07-24 23:17:06.392953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.790 [2024-07-24 23:17:06.393667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.790 [2024-07-24 23:17:06.393704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:48.790 [2024-07-24 23:17:06.393716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:48.790 [2024-07-24 23:17:06.393966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:48.790 [2024-07-24 23:17:06.394190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.790 [2024-07-24 23:17:06.394199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.790 [2024-07-24 23:17:06.394208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.790 [2024-07-24 23:17:06.397775] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.790 [2024-07-24 23:17:06.406795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.791 [2024-07-24 23:17:06.407476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.791 [2024-07-24 23:17:06.407513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:48.791 [2024-07-24 23:17:06.407523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:48.791 [2024-07-24 23:17:06.407771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:48.791 [2024-07-24 23:17:06.407995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.791 [2024-07-24 23:17:06.408008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.791 [2024-07-24 23:17:06.408016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.791 [2024-07-24 23:17:06.411583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.791 [2024-07-24 23:17:06.420596] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.791 [2024-07-24 23:17:06.421293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.791 [2024-07-24 23:17:06.421330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:48.791 [2024-07-24 23:17:06.421341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:48.791 [2024-07-24 23:17:06.421580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:48.791 [2024-07-24 23:17:06.421811] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.791 [2024-07-24 23:17:06.421820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.791 [2024-07-24 23:17:06.421827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.791 [2024-07-24 23:17:06.425377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.791 [2024-07-24 23:17:06.434588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.791 [2024-07-24 23:17:06.435241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.791 [2024-07-24 23:17:06.435259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:48.791 [2024-07-24 23:17:06.435267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:48.791 [2024-07-24 23:17:06.435487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:48.791 [2024-07-24 23:17:06.435706] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.791 [2024-07-24 23:17:06.435714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.791 [2024-07-24 23:17:06.435721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.791 [2024-07-24 23:17:06.439271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.791 [2024-07-24 23:17:06.448474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.791 [2024-07-24 23:17:06.449160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.791 [2024-07-24 23:17:06.449197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:48.791 [2024-07-24 23:17:06.449207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:48.791 [2024-07-24 23:17:06.449446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:48.791 [2024-07-24 23:17:06.449669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.791 [2024-07-24 23:17:06.449677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.791 [2024-07-24 23:17:06.449685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.791 [2024-07-24 23:17:06.453248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.791 [2024-07-24 23:17:06.462396] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.791 [2024-07-24 23:17:06.463119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.791 [2024-07-24 23:17:06.463156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:48.791 [2024-07-24 23:17:06.463167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:48.791 [2024-07-24 23:17:06.463406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:48.791 [2024-07-24 23:17:06.463628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.791 [2024-07-24 23:17:06.463636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.791 [2024-07-24 23:17:06.463643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.791 [2024-07-24 23:17:06.467198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.791 [2024-07-24 23:17:06.476196] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.791 [2024-07-24 23:17:06.476991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.791 [2024-07-24 23:17:06.477028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:48.791 [2024-07-24 23:17:06.477038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:48.791 [2024-07-24 23:17:06.477277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:48.791 [2024-07-24 23:17:06.477500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.791 [2024-07-24 23:17:06.477508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.791 [2024-07-24 23:17:06.477516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.791 [2024-07-24 23:17:06.481070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.791 [2024-07-24 23:17:06.490073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.791 [2024-07-24 23:17:06.490813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.791 [2024-07-24 23:17:06.490850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:48.791 [2024-07-24 23:17:06.490861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:48.791 [2024-07-24 23:17:06.491099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:48.791 [2024-07-24 23:17:06.491322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.791 [2024-07-24 23:17:06.491330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.791 [2024-07-24 23:17:06.491338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.791 [2024-07-24 23:17:06.494895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.791 [2024-07-24 23:17:06.503910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.791 [2024-07-24 23:17:06.504637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.791 [2024-07-24 23:17:06.504673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:48.791 [2024-07-24 23:17:06.504684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:48.791 [2024-07-24 23:17:06.504936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:48.791 [2024-07-24 23:17:06.505160] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.791 [2024-07-24 23:17:06.505168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.791 [2024-07-24 23:17:06.505176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.791 [2024-07-24 23:17:06.508727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.791 [2024-07-24 23:17:06.517731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.791 [2024-07-24 23:17:06.518342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.791 [2024-07-24 23:17:06.518361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:48.791 [2024-07-24 23:17:06.518369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:48.791 [2024-07-24 23:17:06.518589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:48.791 [2024-07-24 23:17:06.518853] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.791 [2024-07-24 23:17:06.518863] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.791 [2024-07-24 23:17:06.518870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.791 [2024-07-24 23:17:06.522423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.791 [2024-07-24 23:17:06.531643] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.791 [2024-07-24 23:17:06.532329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.791 [2024-07-24 23:17:06.532366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:48.791 [2024-07-24 23:17:06.532377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:48.791 [2024-07-24 23:17:06.532616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:48.791 [2024-07-24 23:17:06.532849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.791 [2024-07-24 23:17:06.532858] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.791 [2024-07-24 23:17:06.532865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.791 [2024-07-24 23:17:06.536421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.791 [2024-07-24 23:17:06.545645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.791 [2024-07-24 23:17:06.546293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.792 [2024-07-24 23:17:06.546312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:48.792 [2024-07-24 23:17:06.546319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:48.792 [2024-07-24 23:17:06.546539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:48.792 [2024-07-24 23:17:06.546766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.792 [2024-07-24 23:17:06.546774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.792 [2024-07-24 23:17:06.546785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.792 [2024-07-24 23:17:06.550335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.792 [2024-07-24 23:17:06.559534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.792 [2024-07-24 23:17:06.560105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.792 [2024-07-24 23:17:06.560142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:48.792 [2024-07-24 23:17:06.560152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:48.792 [2024-07-24 23:17:06.560391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:48.792 [2024-07-24 23:17:06.560614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.792 [2024-07-24 23:17:06.560622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.792 [2024-07-24 23:17:06.560630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.792 [2024-07-24 23:17:06.564190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.792 [2024-07-24 23:17:06.573418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.792 [2024-07-24 23:17:06.574104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.792 [2024-07-24 23:17:06.574141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:48.792 [2024-07-24 23:17:06.574151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:48.792 [2024-07-24 23:17:06.574390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:48.792 [2024-07-24 23:17:06.574613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.792 [2024-07-24 23:17:06.574622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.792 [2024-07-24 23:17:06.574629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.054 [2024-07-24 23:17:06.578188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.054 [2024-07-24 23:17:06.587397] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.054 [2024-07-24 23:17:06.588071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.054 [2024-07-24 23:17:06.588108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.054 [2024-07-24 23:17:06.588118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.054 [2024-07-24 23:17:06.588358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.054 [2024-07-24 23:17:06.588581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.054 [2024-07-24 23:17:06.588589] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.055 [2024-07-24 23:17:06.588597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.055 [2024-07-24 23:17:06.592152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.055 [2024-07-24 23:17:06.601377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.055 [2024-07-24 23:17:06.602081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.055 [2024-07-24 23:17:06.602118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.055 [2024-07-24 23:17:06.602128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.055 [2024-07-24 23:17:06.602367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.055 [2024-07-24 23:17:06.602590] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.055 [2024-07-24 23:17:06.602598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.055 [2024-07-24 23:17:06.602606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.055 [2024-07-24 23:17:06.606165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.055 [2024-07-24 23:17:06.615387] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.055 [2024-07-24 23:17:06.616017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.055 [2024-07-24 23:17:06.616036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.055 [2024-07-24 23:17:06.616044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.055 [2024-07-24 23:17:06.616264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.055 [2024-07-24 23:17:06.616483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.055 [2024-07-24 23:17:06.616491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.055 [2024-07-24 23:17:06.616498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.055 [2024-07-24 23:17:06.620053] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.055 [2024-07-24 23:17:06.629273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.055 [2024-07-24 23:17:06.629997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.055 [2024-07-24 23:17:06.630034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.055 [2024-07-24 23:17:06.630045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.055 [2024-07-24 23:17:06.630284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.055 [2024-07-24 23:17:06.630507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.055 [2024-07-24 23:17:06.630516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.055 [2024-07-24 23:17:06.630523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.055 [2024-07-24 23:17:06.634090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.055 [2024-07-24 23:17:06.643102] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.055 [2024-07-24 23:17:06.643842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.055 [2024-07-24 23:17:06.643878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.055 [2024-07-24 23:17:06.643889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.055 [2024-07-24 23:17:06.644133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.055 [2024-07-24 23:17:06.644356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.055 [2024-07-24 23:17:06.644364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.055 [2024-07-24 23:17:06.644372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.055 [2024-07-24 23:17:06.647929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.055 [2024-07-24 23:17:06.656931] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.055 [2024-07-24 23:17:06.657580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.055 [2024-07-24 23:17:06.657598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.055 [2024-07-24 23:17:06.657606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.055 [2024-07-24 23:17:06.657834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.055 [2024-07-24 23:17:06.658055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.055 [2024-07-24 23:17:06.658063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.055 [2024-07-24 23:17:06.658070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.055 [2024-07-24 23:17:06.661613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.055 [2024-07-24 23:17:06.670820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.055 [2024-07-24 23:17:06.671536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.055 [2024-07-24 23:17:06.671573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.055 [2024-07-24 23:17:06.671583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.055 [2024-07-24 23:17:06.671831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.055 [2024-07-24 23:17:06.672055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.055 [2024-07-24 23:17:06.672063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.055 [2024-07-24 23:17:06.672070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.055 [2024-07-24 23:17:06.675621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.055 [2024-07-24 23:17:06.684621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.055 [2024-07-24 23:17:06.685318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.055 [2024-07-24 23:17:06.685355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.055 [2024-07-24 23:17:06.685366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.055 [2024-07-24 23:17:06.685605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.055 [2024-07-24 23:17:06.685834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.055 [2024-07-24 23:17:06.685844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.055 [2024-07-24 23:17:06.685857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.055 [2024-07-24 23:17:06.689409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.055 [2024-07-24 23:17:06.698435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.055 [2024-07-24 23:17:06.699080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.055 [2024-07-24 23:17:06.699099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.055 [2024-07-24 23:17:06.699107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.055 [2024-07-24 23:17:06.699327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.055 [2024-07-24 23:17:06.699547] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.055 [2024-07-24 23:17:06.699555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.055 [2024-07-24 23:17:06.699562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.055 [2024-07-24 23:17:06.703115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.055 [2024-07-24 23:17:06.712359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.055 [2024-07-24 23:17:06.712944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.055 [2024-07-24 23:17:06.712960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.055 [2024-07-24 23:17:06.712967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.055 [2024-07-24 23:17:06.713187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.055 [2024-07-24 23:17:06.713415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.055 [2024-07-24 23:17:06.713422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.055 [2024-07-24 23:17:06.713429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.055 [2024-07-24 23:17:06.716989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.055 [2024-07-24 23:17:06.726211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.055 [2024-07-24 23:17:06.726873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.055 [2024-07-24 23:17:06.726910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.055 [2024-07-24 23:17:06.726921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.055 [2024-07-24 23:17:06.727164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.056 [2024-07-24 23:17:06.727387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.056 [2024-07-24 23:17:06.727396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.056 [2024-07-24 23:17:06.727403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.056 [2024-07-24 23:17:06.730964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.056 [2024-07-24 23:17:06.740179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.056 [2024-07-24 23:17:06.740879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.056 [2024-07-24 23:17:06.740920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.056 [2024-07-24 23:17:06.740932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.056 [2024-07-24 23:17:06.741172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.056 [2024-07-24 23:17:06.741394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.056 [2024-07-24 23:17:06.741403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.056 [2024-07-24 23:17:06.741411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.056 [2024-07-24 23:17:06.744970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.056 [2024-07-24 23:17:06.753975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.056 [2024-07-24 23:17:06.754671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.056 [2024-07-24 23:17:06.754709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.056 [2024-07-24 23:17:06.754720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.056 [2024-07-24 23:17:06.754972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.056 [2024-07-24 23:17:06.755195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.056 [2024-07-24 23:17:06.755204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.056 [2024-07-24 23:17:06.755211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.056 [2024-07-24 23:17:06.758765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.056 [2024-07-24 23:17:06.767771] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.056 [2024-07-24 23:17:06.768496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.056 [2024-07-24 23:17:06.768533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.056 [2024-07-24 23:17:06.768544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.056 [2024-07-24 23:17:06.768790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.056 [2024-07-24 23:17:06.769014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.056 [2024-07-24 23:17:06.769023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.056 [2024-07-24 23:17:06.769030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.056 [2024-07-24 23:17:06.772582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.056 [2024-07-24 23:17:06.781590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.056 [2024-07-24 23:17:06.782279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.056 [2024-07-24 23:17:06.782316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.056 [2024-07-24 23:17:06.782327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.056 [2024-07-24 23:17:06.782566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.056 [2024-07-24 23:17:06.782800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.056 [2024-07-24 23:17:06.782810] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.056 [2024-07-24 23:17:06.782817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.056 [2024-07-24 23:17:06.786369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.056 [2024-07-24 23:17:06.795579] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.056 [2024-07-24 23:17:06.796223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.056 [2024-07-24 23:17:06.796243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.056 [2024-07-24 23:17:06.796251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.056 [2024-07-24 23:17:06.796471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.056 [2024-07-24 23:17:06.796691] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.056 [2024-07-24 23:17:06.796698] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.056 [2024-07-24 23:17:06.796705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.056 [2024-07-24 23:17:06.800274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.056 [2024-07-24 23:17:06.809484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.056 [2024-07-24 23:17:06.810175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.056 [2024-07-24 23:17:06.810212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.056 [2024-07-24 23:17:06.810223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.056 [2024-07-24 23:17:06.810461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.056 [2024-07-24 23:17:06.810684] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.056 [2024-07-24 23:17:06.810692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.056 [2024-07-24 23:17:06.810700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.056 [2024-07-24 23:17:06.814270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.056 [2024-07-24 23:17:06.823482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.056 [2024-07-24 23:17:06.824218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.056 [2024-07-24 23:17:06.824254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.056 [2024-07-24 23:17:06.824265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.056 [2024-07-24 23:17:06.824504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.056 [2024-07-24 23:17:06.824726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.056 [2024-07-24 23:17:06.824735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.056 [2024-07-24 23:17:06.824743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.056 [2024-07-24 23:17:06.828310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.056 [2024-07-24 23:17:06.837319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.056 [2024-07-24 23:17:06.838068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.056 [2024-07-24 23:17:06.838105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.056 [2024-07-24 23:17:06.838116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.056 [2024-07-24 23:17:06.838355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.056 [2024-07-24 23:17:06.838577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.056 [2024-07-24 23:17:06.838585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.056 [2024-07-24 23:17:06.838593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.319 [2024-07-24 23:17:06.842154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.320 [2024-07-24 23:17:06.851162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.320 [2024-07-24 23:17:06.851851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.320 [2024-07-24 23:17:06.851888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.320 [2024-07-24 23:17:06.851900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.320 [2024-07-24 23:17:06.852141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.320 [2024-07-24 23:17:06.852364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.320 [2024-07-24 23:17:06.852372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.320 [2024-07-24 23:17:06.852379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.320 [2024-07-24 23:17:06.855939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.320 [2024-07-24 23:17:06.865154] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.320 [2024-07-24 23:17:06.865757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.320 [2024-07-24 23:17:06.865776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:49.320 [2024-07-24 23:17:06.865784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:49.320 [2024-07-24 23:17:06.866004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:49.320 [2024-07-24 23:17:06.866223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.320 [2024-07-24 23:17:06.866230] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.320 [2024-07-24 23:17:06.866237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.320 [2024-07-24 23:17:06.869789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.320 [2024-07-24 23:17:06.878998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.320 [2024-07-24 23:17:06.879642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.320 [2024-07-24 23:17:06.879678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:49.320 [2024-07-24 23:17:06.879696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:49.320 [2024-07-24 23:17:06.879942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:49.320 [2024-07-24 23:17:06.880166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.320 [2024-07-24 23:17:06.880174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.320 [2024-07-24 23:17:06.880181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.320 [2024-07-24 23:17:06.883731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.320 [2024-07-24 23:17:06.892945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.320 [2024-07-24 23:17:06.893595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.320 [2024-07-24 23:17:06.893613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:49.320 [2024-07-24 23:17:06.893621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:49.320 [2024-07-24 23:17:06.893847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:49.320 [2024-07-24 23:17:06.894067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.320 [2024-07-24 23:17:06.894074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.320 [2024-07-24 23:17:06.894081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.320 [2024-07-24 23:17:06.897629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.320 [2024-07-24 23:17:06.906853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.320 [2024-07-24 23:17:06.907579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.320 [2024-07-24 23:17:06.907616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:49.320 [2024-07-24 23:17:06.907626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:49.320 [2024-07-24 23:17:06.907872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:49.320 [2024-07-24 23:17:06.908096] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.320 [2024-07-24 23:17:06.908104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.320 [2024-07-24 23:17:06.908111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.320 [2024-07-24 23:17:06.911664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.320 [2024-07-24 23:17:06.920680] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.320 [2024-07-24 23:17:06.921428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.320 [2024-07-24 23:17:06.921465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:49.320 [2024-07-24 23:17:06.921476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:49.320 [2024-07-24 23:17:06.921715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:49.320 [2024-07-24 23:17:06.921945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.320 [2024-07-24 23:17:06.921959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.320 [2024-07-24 23:17:06.921967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.320 [2024-07-24 23:17:06.925517] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.320 [2024-07-24 23:17:06.934521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.320 [2024-07-24 23:17:06.935143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.320 [2024-07-24 23:17:06.935162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:49.320 [2024-07-24 23:17:06.935169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:49.320 [2024-07-24 23:17:06.935389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:49.320 [2024-07-24 23:17:06.935609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.320 [2024-07-24 23:17:06.935617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.320 [2024-07-24 23:17:06.935624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.320 [2024-07-24 23:17:06.939177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.320 [2024-07-24 23:17:06.948385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.320 [2024-07-24 23:17:06.948968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.320 [2024-07-24 23:17:06.949005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:49.320 [2024-07-24 23:17:06.949016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:49.320 [2024-07-24 23:17:06.949255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:49.320 [2024-07-24 23:17:06.949477] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.320 [2024-07-24 23:17:06.949485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.320 [2024-07-24 23:17:06.949493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.320 [2024-07-24 23:17:06.953053] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.320 [2024-07-24 23:17:06.962268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.320 [2024-07-24 23:17:06.962967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.320 [2024-07-24 23:17:06.963004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:49.320 [2024-07-24 23:17:06.963014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:49.320 [2024-07-24 23:17:06.963253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:49.320 [2024-07-24 23:17:06.963476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.320 [2024-07-24 23:17:06.963484] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.320 [2024-07-24 23:17:06.963492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.320 [2024-07-24 23:17:06.967051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.320 [2024-07-24 23:17:06.976269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.320 [2024-07-24 23:17:06.977011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.320 [2024-07-24 23:17:06.977048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:49.320 [2024-07-24 23:17:06.977058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:49.320 [2024-07-24 23:17:06.977297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:49.320 [2024-07-24 23:17:06.977521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.320 [2024-07-24 23:17:06.977529] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.320 [2024-07-24 23:17:06.977537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.320 [2024-07-24 23:17:06.981094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.321 [2024-07-24 23:17:06.990097] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.321 [2024-07-24 23:17:06.990764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.321 [2024-07-24 23:17:06.990782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:49.321 [2024-07-24 23:17:06.990790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:49.321 [2024-07-24 23:17:06.991010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:49.321 [2024-07-24 23:17:06.991230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.321 [2024-07-24 23:17:06.991238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.321 [2024-07-24 23:17:06.991245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.321 [2024-07-24 23:17:06.994795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.321 [2024-07-24 23:17:07.004014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.321 [2024-07-24 23:17:07.004790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.321 [2024-07-24 23:17:07.004827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:49.321 [2024-07-24 23:17:07.004838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:49.321 [2024-07-24 23:17:07.005077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:49.321 [2024-07-24 23:17:07.005300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.321 [2024-07-24 23:17:07.005308] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.321 [2024-07-24 23:17:07.005315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.321 [2024-07-24 23:17:07.008876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.321 [2024-07-24 23:17:07.017888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.321 [2024-07-24 23:17:07.018533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.321 [2024-07-24 23:17:07.018552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:49.321 [2024-07-24 23:17:07.018560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:49.321 [2024-07-24 23:17:07.018790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:49.321 [2024-07-24 23:17:07.019010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.321 [2024-07-24 23:17:07.019018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.321 [2024-07-24 23:17:07.019025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.321 [2024-07-24 23:17:07.022576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.321 [2024-07-24 23:17:07.031787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.321 [2024-07-24 23:17:07.032378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.321 [2024-07-24 23:17:07.032394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:49.321 [2024-07-24 23:17:07.032401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:49.321 [2024-07-24 23:17:07.032620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:49.321 [2024-07-24 23:17:07.032845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.321 [2024-07-24 23:17:07.032854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.321 [2024-07-24 23:17:07.032861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.321 [2024-07-24 23:17:07.036405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.321 [2024-07-24 23:17:07.045613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.321 [2024-07-24 23:17:07.046183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.321 [2024-07-24 23:17:07.046220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:49.321 [2024-07-24 23:17:07.046230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:49.321 [2024-07-24 23:17:07.046469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:49.321 [2024-07-24 23:17:07.046691] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.321 [2024-07-24 23:17:07.046700] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.321 [2024-07-24 23:17:07.046707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.321 [2024-07-24 23:17:07.050264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.321 [2024-07-24 23:17:07.059477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.321 [2024-07-24 23:17:07.060171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.321 [2024-07-24 23:17:07.060208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:49.321 [2024-07-24 23:17:07.060218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:49.321 [2024-07-24 23:17:07.060458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:49.321 [2024-07-24 23:17:07.060681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.321 [2024-07-24 23:17:07.060689] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.321 [2024-07-24 23:17:07.060701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.321 [2024-07-24 23:17:07.064260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.321 [2024-07-24 23:17:07.073475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.321 [2024-07-24 23:17:07.074097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.321 [2024-07-24 23:17:07.074134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:49.321 [2024-07-24 23:17:07.074145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:49.321 [2024-07-24 23:17:07.074384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:49.321 [2024-07-24 23:17:07.074606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.321 [2024-07-24 23:17:07.074615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.321 [2024-07-24 23:17:07.074622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.321 [2024-07-24 23:17:07.078182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.321 [2024-07-24 23:17:07.087396] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.321 [2024-07-24 23:17:07.088117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.321 [2024-07-24 23:17:07.088153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:49.321 [2024-07-24 23:17:07.088164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:49.321 [2024-07-24 23:17:07.088403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:49.321 [2024-07-24 23:17:07.088626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.321 [2024-07-24 23:17:07.088634] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.321 [2024-07-24 23:17:07.088642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.321 [2024-07-24 23:17:07.092199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.321 [2024-07-24 23:17:07.101214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.321 [2024-07-24 23:17:07.101952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.321 [2024-07-24 23:17:07.101988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:49.321 [2024-07-24 23:17:07.101999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:49.321 [2024-07-24 23:17:07.102238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:49.321 [2024-07-24 23:17:07.102460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.321 [2024-07-24 23:17:07.102469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.321 [2024-07-24 23:17:07.102476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.584 [2024-07-24 23:17:07.106037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.584 [2024-07-24 23:17:07.115049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.584 [2024-07-24 23:17:07.115705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.584 [2024-07-24 23:17:07.115723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:49.584 [2024-07-24 23:17:07.115731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:49.584 [2024-07-24 23:17:07.115956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:49.584 [2024-07-24 23:17:07.116176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.584 [2024-07-24 23:17:07.116184] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.584 [2024-07-24 23:17:07.116191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.584 [2024-07-24 23:17:07.119736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.584 [2024-07-24 23:17:07.128947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.584 [2024-07-24 23:17:07.129672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.584 [2024-07-24 23:17:07.129709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:49.584 [2024-07-24 23:17:07.129721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:49.584 [2024-07-24 23:17:07.129969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:49.584 [2024-07-24 23:17:07.130193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.584 [2024-07-24 23:17:07.130201] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.584 [2024-07-24 23:17:07.130208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.584 [2024-07-24 23:17:07.133764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.584 [2024-07-24 23:17:07.142771] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.584 [2024-07-24 23:17:07.143399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.584 [2024-07-24 23:17:07.143418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:49.584 [2024-07-24 23:17:07.143426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:49.584 [2024-07-24 23:17:07.143646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:49.584 [2024-07-24 23:17:07.143872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.584 [2024-07-24 23:17:07.143880] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.584 [2024-07-24 23:17:07.143887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.584 [2024-07-24 23:17:07.147433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.584 [2024-07-24 23:17:07.156639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.584 [2024-07-24 23:17:07.157325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.584 [2024-07-24 23:17:07.157362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:49.585 [2024-07-24 23:17:07.157372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:49.585 [2024-07-24 23:17:07.157611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:49.585 [2024-07-24 23:17:07.157845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.585 [2024-07-24 23:17:07.157855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.585 [2024-07-24 23:17:07.157862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.585 [2024-07-24 23:17:07.161413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.585 [2024-07-24 23:17:07.170628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.585 [2024-07-24 23:17:07.171287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.585 [2024-07-24 23:17:07.171306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:49.585 [2024-07-24 23:17:07.171314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:49.585 [2024-07-24 23:17:07.171533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:49.585 [2024-07-24 23:17:07.171759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.585 [2024-07-24 23:17:07.171767] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.585 [2024-07-24 23:17:07.171774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.585 [2024-07-24 23:17:07.175321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.585 [2024-07-24 23:17:07.184529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.585 [2024-07-24 23:17:07.185219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.585 [2024-07-24 23:17:07.185255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:49.585 [2024-07-24 23:17:07.185266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:49.585 [2024-07-24 23:17:07.185505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:49.585 [2024-07-24 23:17:07.185727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.585 [2024-07-24 23:17:07.185736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.585 [2024-07-24 23:17:07.185743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.585 [2024-07-24 23:17:07.189304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.585 [2024-07-24 23:17:07.198530] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.585 [2024-07-24 23:17:07.199221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.585 [2024-07-24 23:17:07.199258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:49.585 [2024-07-24 23:17:07.199269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:49.585 [2024-07-24 23:17:07.199508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:49.585 [2024-07-24 23:17:07.199730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.585 [2024-07-24 23:17:07.199738] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.585 [2024-07-24 23:17:07.199745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.585 [2024-07-24 23:17:07.203312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.585 [2024-07-24 23:17:07.212529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.585 [2024-07-24 23:17:07.213181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.585 [2024-07-24 23:17:07.213199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:49.585 [2024-07-24 23:17:07.213206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:49.585 [2024-07-24 23:17:07.213433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:49.585 [2024-07-24 23:17:07.213653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.585 [2024-07-24 23:17:07.213661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.585 [2024-07-24 23:17:07.213670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.585 [2024-07-24 23:17:07.217223] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.585 [2024-07-24 23:17:07.226433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.585 [2024-07-24 23:17:07.226966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.585 [2024-07-24 23:17:07.226983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:49.585 [2024-07-24 23:17:07.226991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:49.585 [2024-07-24 23:17:07.227210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:49.585 [2024-07-24 23:17:07.227428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.585 [2024-07-24 23:17:07.227436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.585 [2024-07-24 23:17:07.227442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.585 [2024-07-24 23:17:07.230991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.585 [2024-07-24 23:17:07.240411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.585 [2024-07-24 23:17:07.241106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.585 [2024-07-24 23:17:07.241142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:49.585 [2024-07-24 23:17:07.241153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:49.585 [2024-07-24 23:17:07.241392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:49.585 [2024-07-24 23:17:07.241615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.585 [2024-07-24 23:17:07.241623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.585 [2024-07-24 23:17:07.241631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.585 [2024-07-24 23:17:07.245191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.585 [2024-07-24 23:17:07.254411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.585 [2024-07-24 23:17:07.255182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.585 [2024-07-24 23:17:07.255223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.585 [2024-07-24 23:17:07.255234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.585 [2024-07-24 23:17:07.255473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.585 [2024-07-24 23:17:07.255695] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.585 [2024-07-24 23:17:07.255704] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.585 [2024-07-24 23:17:07.255711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.585 [2024-07-24 23:17:07.259273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.585 [2024-07-24 23:17:07.268279] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.585 [2024-07-24 23:17:07.269040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.585 [2024-07-24 23:17:07.269076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.585 [2024-07-24 23:17:07.269087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.585 [2024-07-24 23:17:07.269326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.585 [2024-07-24 23:17:07.269549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.585 [2024-07-24 23:17:07.269558] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.585 [2024-07-24 23:17:07.269565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.585 [2024-07-24 23:17:07.273123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.585 [2024-07-24 23:17:07.282129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.585 [2024-07-24 23:17:07.282656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.585 [2024-07-24 23:17:07.282675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.585 [2024-07-24 23:17:07.282683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.586 [2024-07-24 23:17:07.282908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.586 [2024-07-24 23:17:07.283127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.586 [2024-07-24 23:17:07.283135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.586 [2024-07-24 23:17:07.283142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.586 [2024-07-24 23:17:07.286689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.586 [2024-07-24 23:17:07.296112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.586 [2024-07-24 23:17:07.296740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.586 [2024-07-24 23:17:07.296760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.586 [2024-07-24 23:17:07.296768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.586 [2024-07-24 23:17:07.296987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.586 [2024-07-24 23:17:07.297214] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.586 [2024-07-24 23:17:07.297222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.586 [2024-07-24 23:17:07.297229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.586 [2024-07-24 23:17:07.300872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.586 [2024-07-24 23:17:07.310085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.586 [2024-07-24 23:17:07.310801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.586 [2024-07-24 23:17:07.310838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.586 [2024-07-24 23:17:07.310850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.586 [2024-07-24 23:17:07.311092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.586 [2024-07-24 23:17:07.311314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.586 [2024-07-24 23:17:07.311323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.586 [2024-07-24 23:17:07.311330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.586 [2024-07-24 23:17:07.314901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.586 [2024-07-24 23:17:07.323896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.586 [2024-07-24 23:17:07.324635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.586 [2024-07-24 23:17:07.324672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.586 [2024-07-24 23:17:07.324682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.586 [2024-07-24 23:17:07.324929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.586 [2024-07-24 23:17:07.325152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.586 [2024-07-24 23:17:07.325161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.586 [2024-07-24 23:17:07.325168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.586 [2024-07-24 23:17:07.328718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.586 [2024-07-24 23:17:07.337718] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.586 [2024-07-24 23:17:07.338460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.586 [2024-07-24 23:17:07.338497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.586 [2024-07-24 23:17:07.338508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.586 [2024-07-24 23:17:07.338747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.586 [2024-07-24 23:17:07.338978] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.586 [2024-07-24 23:17:07.338986] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.586 [2024-07-24 23:17:07.338993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.586 [2024-07-24 23:17:07.342546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.586 [2024-07-24 23:17:07.351552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.586 [2024-07-24 23:17:07.352190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.586 [2024-07-24 23:17:07.352209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.586 [2024-07-24 23:17:07.352217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.586 [2024-07-24 23:17:07.352437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.586 [2024-07-24 23:17:07.352656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.586 [2024-07-24 23:17:07.352664] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.586 [2024-07-24 23:17:07.352671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.586 [2024-07-24 23:17:07.356221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.586 [2024-07-24 23:17:07.365425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.586 [2024-07-24 23:17:07.365946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.586 [2024-07-24 23:17:07.365962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.586 [2024-07-24 23:17:07.365969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.586 [2024-07-24 23:17:07.366189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.586 [2024-07-24 23:17:07.366407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.586 [2024-07-24 23:17:07.366415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.586 [2024-07-24 23:17:07.366422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.849 [2024-07-24 23:17:07.369969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.849 [2024-07-24 23:17:07.379379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.849 [2024-07-24 23:17:07.380074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.849 [2024-07-24 23:17:07.380110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.849 [2024-07-24 23:17:07.380121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.849 [2024-07-24 23:17:07.380360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.849 [2024-07-24 23:17:07.380583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.849 [2024-07-24 23:17:07.380591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.849 [2024-07-24 23:17:07.380599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.849 [2024-07-24 23:17:07.384159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.849 [2024-07-24 23:17:07.393368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.849 [2024-07-24 23:17:07.394059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.849 [2024-07-24 23:17:07.394096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.849 [2024-07-24 23:17:07.394111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.849 [2024-07-24 23:17:07.394350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.849 [2024-07-24 23:17:07.394572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.849 [2024-07-24 23:17:07.394580] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.849 [2024-07-24 23:17:07.394588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.849 [2024-07-24 23:17:07.398155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.849 [2024-07-24 23:17:07.407363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.849 [2024-07-24 23:17:07.407998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.849 [2024-07-24 23:17:07.408035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.849 [2024-07-24 23:17:07.408045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.849 [2024-07-24 23:17:07.408284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.849 [2024-07-24 23:17:07.408508] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.849 [2024-07-24 23:17:07.408516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.849 [2024-07-24 23:17:07.408523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.849 [2024-07-24 23:17:07.412083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.849 [2024-07-24 23:17:07.421356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.849 [2024-07-24 23:17:07.421968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.849 [2024-07-24 23:17:07.421987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.849 [2024-07-24 23:17:07.421995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.849 [2024-07-24 23:17:07.422215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.849 [2024-07-24 23:17:07.422434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.849 [2024-07-24 23:17:07.422442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.849 [2024-07-24 23:17:07.422449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.849 [2024-07-24 23:17:07.426000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.849 [2024-07-24 23:17:07.435201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.849 [2024-07-24 23:17:07.435730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.849 [2024-07-24 23:17:07.435745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.849 [2024-07-24 23:17:07.435758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.849 [2024-07-24 23:17:07.435979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.849 [2024-07-24 23:17:07.436198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.849 [2024-07-24 23:17:07.436210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.850 [2024-07-24 23:17:07.436217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.850 [2024-07-24 23:17:07.439765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.850 [2024-07-24 23:17:07.449176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.850 [2024-07-24 23:17:07.449854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.850 [2024-07-24 23:17:07.449891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.850 [2024-07-24 23:17:07.449903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.850 [2024-07-24 23:17:07.450143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.850 [2024-07-24 23:17:07.450367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.850 [2024-07-24 23:17:07.450375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.850 [2024-07-24 23:17:07.450382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.850 [2024-07-24 23:17:07.453941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.850 [2024-07-24 23:17:07.463149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.850 [2024-07-24 23:17:07.463851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.850 [2024-07-24 23:17:07.463889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.850 [2024-07-24 23:17:07.463900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.850 [2024-07-24 23:17:07.464141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.850 [2024-07-24 23:17:07.464363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.850 [2024-07-24 23:17:07.464372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.850 [2024-07-24 23:17:07.464379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.850 [2024-07-24 23:17:07.467938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.850 [2024-07-24 23:17:07.477151] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.850 [2024-07-24 23:17:07.477831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.850 [2024-07-24 23:17:07.477868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.850 [2024-07-24 23:17:07.477878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.850 [2024-07-24 23:17:07.478117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.850 [2024-07-24 23:17:07.478340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.850 [2024-07-24 23:17:07.478349] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.850 [2024-07-24 23:17:07.478356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.850 [2024-07-24 23:17:07.481913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.850 [2024-07-24 23:17:07.491128] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.850 [2024-07-24 23:17:07.491834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.850 [2024-07-24 23:17:07.491870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.850 [2024-07-24 23:17:07.491881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.850 [2024-07-24 23:17:07.492121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.850 [2024-07-24 23:17:07.492343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.850 [2024-07-24 23:17:07.492352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.850 [2024-07-24 23:17:07.492360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.850 [2024-07-24 23:17:07.495917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.850 [2024-07-24 23:17:07.504931] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.850 [2024-07-24 23:17:07.505575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.850 [2024-07-24 23:17:07.505593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.850 [2024-07-24 23:17:07.505601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.850 [2024-07-24 23:17:07.505825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.850 [2024-07-24 23:17:07.506045] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.850 [2024-07-24 23:17:07.506054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.850 [2024-07-24 23:17:07.506061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.850 [2024-07-24 23:17:07.509608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.850 [2024-07-24 23:17:07.518822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.850 [2024-07-24 23:17:07.519417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.850 [2024-07-24 23:17:07.519433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.850 [2024-07-24 23:17:07.519440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.850 [2024-07-24 23:17:07.519659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.850 [2024-07-24 23:17:07.519883] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.850 [2024-07-24 23:17:07.519892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.850 [2024-07-24 23:17:07.519899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.850 [2024-07-24 23:17:07.523443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.850 [2024-07-24 23:17:07.532648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.850 [2024-07-24 23:17:07.533214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.850 [2024-07-24 23:17:07.533230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.850 [2024-07-24 23:17:07.533237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.850 [2024-07-24 23:17:07.533460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.850 [2024-07-24 23:17:07.533679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.850 [2024-07-24 23:17:07.533687] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.850 [2024-07-24 23:17:07.533693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.850 [2024-07-24 23:17:07.537242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.850 [2024-07-24 23:17:07.546453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.850 [2024-07-24 23:17:07.546907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.850 [2024-07-24 23:17:07.546927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:49.850 [2024-07-24 23:17:07.546935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:49.850 [2024-07-24 23:17:07.547156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:49.850 [2024-07-24 23:17:07.547375] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.850 [2024-07-24 23:17:07.547384] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.850 [2024-07-24 23:17:07.547391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.850 [2024-07-24 23:17:07.550943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.850 [2024-07-24 23:17:07.560355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.850 [2024-07-24 23:17:07.561043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.850 [2024-07-24 23:17:07.561080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:49.850 [2024-07-24 23:17:07.561091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:49.850 [2024-07-24 23:17:07.561330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:49.850 [2024-07-24 23:17:07.561552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.850 [2024-07-24 23:17:07.561561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.850 [2024-07-24 23:17:07.561568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.850 [2024-07-24 23:17:07.565124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.850 [2024-07-24 23:17:07.574334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.850 [2024-07-24 23:17:07.575040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.850 [2024-07-24 23:17:07.575077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:49.850 [2024-07-24 23:17:07.575088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:49.850 [2024-07-24 23:17:07.575328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:49.850 [2024-07-24 23:17:07.575550] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.850 [2024-07-24 23:17:07.575558] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.851 [2024-07-24 23:17:07.575570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.851 [2024-07-24 23:17:07.579131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.851 [2024-07-24 23:17:07.588130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.851 [2024-07-24 23:17:07.588833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.851 [2024-07-24 23:17:07.588870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:49.851 [2024-07-24 23:17:07.588881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:49.851 [2024-07-24 23:17:07.589119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:49.851 [2024-07-24 23:17:07.589342] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.851 [2024-07-24 23:17:07.589350] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.851 [2024-07-24 23:17:07.589357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.851 [2024-07-24 23:17:07.592916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.851 [2024-07-24 23:17:07.601928] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.851 [2024-07-24 23:17:07.602620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.851 [2024-07-24 23:17:07.602656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:49.851 [2024-07-24 23:17:07.602666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:49.851 [2024-07-24 23:17:07.602913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:49.851 [2024-07-24 23:17:07.603137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.851 [2024-07-24 23:17:07.603145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.851 [2024-07-24 23:17:07.603153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.851 [2024-07-24 23:17:07.606704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.851 [2024-07-24 23:17:07.615923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.851 [2024-07-24 23:17:07.616659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.851 [2024-07-24 23:17:07.616695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:49.851 [2024-07-24 23:17:07.616707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:49.851 [2024-07-24 23:17:07.616958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:49.851 [2024-07-24 23:17:07.617182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.851 [2024-07-24 23:17:07.617191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.851 [2024-07-24 23:17:07.617198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.851 [2024-07-24 23:17:07.620747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.851 [2024-07-24 23:17:07.629748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.851 [2024-07-24 23:17:07.630429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.851 [2024-07-24 23:17:07.630469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:49.851 [2024-07-24 23:17:07.630480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:49.851 [2024-07-24 23:17:07.630719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:49.851 [2024-07-24 23:17:07.630950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.851 [2024-07-24 23:17:07.630960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.851 [2024-07-24 23:17:07.630967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.851 [2024-07-24 23:17:07.634518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.112 [2024-07-24 23:17:07.643728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.112 [2024-07-24 23:17:07.644354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.112 [2024-07-24 23:17:07.644391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:50.112 [2024-07-24 23:17:07.644401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:50.112 [2024-07-24 23:17:07.644640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:50.112 [2024-07-24 23:17:07.644872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.112 [2024-07-24 23:17:07.644881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.112 [2024-07-24 23:17:07.644888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.112 [2024-07-24 23:17:07.648439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.112 [2024-07-24 23:17:07.657648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.112 [2024-07-24 23:17:07.658342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.112 [2024-07-24 23:17:07.658379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:50.112 [2024-07-24 23:17:07.658389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:50.112 [2024-07-24 23:17:07.658628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:50.112 [2024-07-24 23:17:07.658859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.112 [2024-07-24 23:17:07.658868] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.112 [2024-07-24 23:17:07.658876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.112 [2024-07-24 23:17:07.662427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.112 [2024-07-24 23:17:07.671636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.112 [2024-07-24 23:17:07.672333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.112 [2024-07-24 23:17:07.672370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:50.112 [2024-07-24 23:17:07.672380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:50.112 [2024-07-24 23:17:07.672619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:50.112 [2024-07-24 23:17:07.672857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.112 [2024-07-24 23:17:07.672866] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.112 [2024-07-24 23:17:07.672874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.112 [2024-07-24 23:17:07.676425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.112 [2024-07-24 23:17:07.685631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.112 [2024-07-24 23:17:07.686245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.112 [2024-07-24 23:17:07.686282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:50.112 [2024-07-24 23:17:07.686293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:50.112 [2024-07-24 23:17:07.686532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:50.112 [2024-07-24 23:17:07.686763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.112 [2024-07-24 23:17:07.686772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.112 [2024-07-24 23:17:07.686779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.112 [2024-07-24 23:17:07.690330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.112 [2024-07-24 23:17:07.699546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.112 [2024-07-24 23:17:07.700162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.112 [2024-07-24 23:17:07.700180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:50.112 [2024-07-24 23:17:07.700188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:50.112 [2024-07-24 23:17:07.700408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:50.112 [2024-07-24 23:17:07.700627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.112 [2024-07-24 23:17:07.700634] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.112 [2024-07-24 23:17:07.700641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.112 [2024-07-24 23:17:07.704191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.112 [2024-07-24 23:17:07.713393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.112 [2024-07-24 23:17:07.714101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.112 [2024-07-24 23:17:07.714138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:50.112 [2024-07-24 23:17:07.714148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:50.112 [2024-07-24 23:17:07.714387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:50.112 [2024-07-24 23:17:07.714619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.112 [2024-07-24 23:17:07.714628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.112 [2024-07-24 23:17:07.714636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.112 [2024-07-24 23:17:07.718199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.112 [2024-07-24 23:17:07.727199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.112 [2024-07-24 23:17:07.727854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.112 [2024-07-24 23:17:07.727890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:50.112 [2024-07-24 23:17:07.727902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:50.112 [2024-07-24 23:17:07.728145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:50.112 [2024-07-24 23:17:07.728367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.112 [2024-07-24 23:17:07.728376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.113 [2024-07-24 23:17:07.728383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.113 [2024-07-24 23:17:07.731944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.113 [2024-07-24 23:17:07.741158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.113 [2024-07-24 23:17:07.741835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.113 [2024-07-24 23:17:07.741872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:50.113 [2024-07-24 23:17:07.741883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:50.113 [2024-07-24 23:17:07.742122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:50.113 [2024-07-24 23:17:07.742344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.113 [2024-07-24 23:17:07.742352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.113 [2024-07-24 23:17:07.742359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.113 [2024-07-24 23:17:07.745922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.113 [2024-07-24 23:17:07.755140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.113 [2024-07-24 23:17:07.755740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.113 [2024-07-24 23:17:07.755764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:50.113 [2024-07-24 23:17:07.755772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:50.113 [2024-07-24 23:17:07.755992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:50.113 [2024-07-24 23:17:07.756211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.113 [2024-07-24 23:17:07.756219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.113 [2024-07-24 23:17:07.756225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.113 [2024-07-24 23:17:07.759773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.113 [2024-07-24 23:17:07.768976] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.113 [2024-07-24 23:17:07.769584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.113 [2024-07-24 23:17:07.769621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:50.113 [2024-07-24 23:17:07.769636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:50.113 [2024-07-24 23:17:07.769884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:50.113 [2024-07-24 23:17:07.770108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.113 [2024-07-24 23:17:07.770117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.113 [2024-07-24 23:17:07.770125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.113 [2024-07-24 23:17:07.773678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.113 [2024-07-24 23:17:07.782893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.113 [2024-07-24 23:17:07.783581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.113 [2024-07-24 23:17:07.783617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:50.113 [2024-07-24 23:17:07.783628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:50.113 [2024-07-24 23:17:07.783874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:50.113 [2024-07-24 23:17:07.784097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.113 [2024-07-24 23:17:07.784106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.113 [2024-07-24 23:17:07.784113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.113 [2024-07-24 23:17:07.787664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.113 [2024-07-24 23:17:07.796875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.113 [2024-07-24 23:17:07.797608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.113 [2024-07-24 23:17:07.797645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:50.113 [2024-07-24 23:17:07.797655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:50.113 [2024-07-24 23:17:07.797911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:50.113 [2024-07-24 23:17:07.798135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.113 [2024-07-24 23:17:07.798143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.113 [2024-07-24 23:17:07.798151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.113 [2024-07-24 23:17:07.801702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.113 [2024-07-24 23:17:07.810701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.113 [2024-07-24 23:17:07.811374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.113 [2024-07-24 23:17:07.811411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:50.113 [2024-07-24 23:17:07.811421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:50.113 [2024-07-24 23:17:07.811660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:50.113 [2024-07-24 23:17:07.811890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.113 [2024-07-24 23:17:07.811904] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.113 [2024-07-24 23:17:07.811911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.113 [2024-07-24 23:17:07.815472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.113 [2024-07-24 23:17:07.824681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.113 [2024-07-24 23:17:07.825334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.113 [2024-07-24 23:17:07.825353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:50.113 [2024-07-24 23:17:07.825360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:50.113 [2024-07-24 23:17:07.825580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:50.113 [2024-07-24 23:17:07.825805] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.113 [2024-07-24 23:17:07.825813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.113 [2024-07-24 23:17:07.825820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.113 [2024-07-24 23:17:07.829365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.113 [2024-07-24 23:17:07.838568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.113 [2024-07-24 23:17:07.839164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.113 [2024-07-24 23:17:07.839180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:50.113 [2024-07-24 23:17:07.839187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:50.113 [2024-07-24 23:17:07.839406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:50.113 [2024-07-24 23:17:07.839625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.113 [2024-07-24 23:17:07.839632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.113 [2024-07-24 23:17:07.839639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.113 [2024-07-24 23:17:07.843191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.113 [2024-07-24 23:17:07.852390] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.113 [2024-07-24 23:17:07.853079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.113 [2024-07-24 23:17:07.853115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:50.113 [2024-07-24 23:17:07.853125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:50.113 [2024-07-24 23:17:07.853364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:50.113 [2024-07-24 23:17:07.853587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.113 [2024-07-24 23:17:07.853595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.113 [2024-07-24 23:17:07.853603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.113 [2024-07-24 23:17:07.857163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.113 [2024-07-24 23:17:07.866377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.113 [2024-07-24 23:17:07.867090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.113 [2024-07-24 23:17:07.867127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:50.113 [2024-07-24 23:17:07.867138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:50.113 [2024-07-24 23:17:07.867377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:50.113 [2024-07-24 23:17:07.867599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.113 [2024-07-24 23:17:07.867607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.114 [2024-07-24 23:17:07.867615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.114 [2024-07-24 23:17:07.871171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.114 [2024-07-24 23:17:07.880170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.114 [2024-07-24 23:17:07.880851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.114 [2024-07-24 23:17:07.880887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:50.114 [2024-07-24 23:17:07.880898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:50.114 [2024-07-24 23:17:07.881137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:50.114 [2024-07-24 23:17:07.881359] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.114 [2024-07-24 23:17:07.881367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.114 [2024-07-24 23:17:07.881375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.114 [2024-07-24 23:17:07.884936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.114 [2024-07-24 23:17:07.894147] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.114 [2024-07-24 23:17:07.894835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.114 [2024-07-24 23:17:07.894872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.114 [2024-07-24 23:17:07.894883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.114 [2024-07-24 23:17:07.895122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.114 [2024-07-24 23:17:07.895344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.114 [2024-07-24 23:17:07.895353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.114 [2024-07-24 23:17:07.895360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.374 [2024-07-24 23:17:07.898932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.374 [2024-07-24 23:17:07.908145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.374 [2024-07-24 23:17:07.908883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.374 [2024-07-24 23:17:07.908919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.375 [2024-07-24 23:17:07.908935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.375 [2024-07-24 23:17:07.909176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.375 [2024-07-24 23:17:07.909398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.375 [2024-07-24 23:17:07.909407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.375 [2024-07-24 23:17:07.909414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.375 [2024-07-24 23:17:07.912975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.375 [2024-07-24 23:17:07.921982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.375 [2024-07-24 23:17:07.922689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.375 [2024-07-24 23:17:07.922726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.375 [2024-07-24 23:17:07.922736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.375 [2024-07-24 23:17:07.922984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.375 [2024-07-24 23:17:07.923208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.375 [2024-07-24 23:17:07.923216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.375 [2024-07-24 23:17:07.923224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.375 [2024-07-24 23:17:07.926779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.375 [2024-07-24 23:17:07.935781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.375 [2024-07-24 23:17:07.936367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.375 [2024-07-24 23:17:07.936404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.375 [2024-07-24 23:17:07.936415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.375 [2024-07-24 23:17:07.936654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.375 [2024-07-24 23:17:07.936886] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.375 [2024-07-24 23:17:07.936895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.375 [2024-07-24 23:17:07.936903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.375 [2024-07-24 23:17:07.940454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.375 [2024-07-24 23:17:07.949660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.375 [2024-07-24 23:17:07.950355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.375 [2024-07-24 23:17:07.950391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.375 [2024-07-24 23:17:07.950402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.375 [2024-07-24 23:17:07.950641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.375 [2024-07-24 23:17:07.950872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.375 [2024-07-24 23:17:07.950893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.375 [2024-07-24 23:17:07.950900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.375 [2024-07-24 23:17:07.954451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.375 [2024-07-24 23:17:07.963659] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.375 [2024-07-24 23:17:07.964319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.375 [2024-07-24 23:17:07.964356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.375 [2024-07-24 23:17:07.964367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.375 [2024-07-24 23:17:07.964606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.375 [2024-07-24 23:17:07.964837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.375 [2024-07-24 23:17:07.964847] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.375 [2024-07-24 23:17:07.964854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.375 [2024-07-24 23:17:07.968406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.375 [2024-07-24 23:17:07.977617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.375 [2024-07-24 23:17:07.978352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.375 [2024-07-24 23:17:07.978388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.375 [2024-07-24 23:17:07.978399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.375 [2024-07-24 23:17:07.978638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.375 [2024-07-24 23:17:07.978870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.375 [2024-07-24 23:17:07.978879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.375 [2024-07-24 23:17:07.978887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.375 [2024-07-24 23:17:07.982436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.375 [2024-07-24 23:17:07.991434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.375 [2024-07-24 23:17:07.992102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.375 [2024-07-24 23:17:07.992138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.375 [2024-07-24 23:17:07.992149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.375 [2024-07-24 23:17:07.992388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.375 [2024-07-24 23:17:07.992610] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.375 [2024-07-24 23:17:07.992618] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.375 [2024-07-24 23:17:07.992626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.375 [2024-07-24 23:17:07.996185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.375 [2024-07-24 23:17:08.005405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.375 [2024-07-24 23:17:08.006151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.375 [2024-07-24 23:17:08.006188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.375 [2024-07-24 23:17:08.006198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.375 [2024-07-24 23:17:08.006437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.375 [2024-07-24 23:17:08.006660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.375 [2024-07-24 23:17:08.006668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.375 [2024-07-24 23:17:08.006675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.375 [2024-07-24 23:17:08.010234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.375 [2024-07-24 23:17:08.019242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.375 [2024-07-24 23:17:08.019853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.375 [2024-07-24 23:17:08.019890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.375 [2024-07-24 23:17:08.019902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.375 [2024-07-24 23:17:08.020143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.375 [2024-07-24 23:17:08.020366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.375 [2024-07-24 23:17:08.020374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.375 [2024-07-24 23:17:08.020381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.375 [2024-07-24 23:17:08.023942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.375 [2024-07-24 23:17:08.033153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.375 [2024-07-24 23:17:08.033848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.375 [2024-07-24 23:17:08.033885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.375 [2024-07-24 23:17:08.033895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.375 [2024-07-24 23:17:08.034134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.375 [2024-07-24 23:17:08.034357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.375 [2024-07-24 23:17:08.034366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.375 [2024-07-24 23:17:08.034373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.375 [2024-07-24 23:17:08.037933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.375 [2024-07-24 23:17:08.047141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.375 [2024-07-24 23:17:08.047642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.375 [2024-07-24 23:17:08.047664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.376 [2024-07-24 23:17:08.047672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.376 [2024-07-24 23:17:08.047908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.376 [2024-07-24 23:17:08.048130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.376 [2024-07-24 23:17:08.048138] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.376 [2024-07-24 23:17:08.048144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.376 [2024-07-24 23:17:08.051691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.376 [2024-07-24 23:17:08.061109] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.376 [2024-07-24 23:17:08.061834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.376 [2024-07-24 23:17:08.061871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.376 [2024-07-24 23:17:08.061883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.376 [2024-07-24 23:17:08.062126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.376 [2024-07-24 23:17:08.062348] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.376 [2024-07-24 23:17:08.062356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.376 [2024-07-24 23:17:08.062364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.376 [2024-07-24 23:17:08.065923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.376 [2024-07-24 23:17:08.074922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.376 [2024-07-24 23:17:08.075638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.376 [2024-07-24 23:17:08.075674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.376 [2024-07-24 23:17:08.075685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.376 [2024-07-24 23:17:08.075932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.376 [2024-07-24 23:17:08.076156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.376 [2024-07-24 23:17:08.076165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.376 [2024-07-24 23:17:08.076172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.376 [2024-07-24 23:17:08.079722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.376 [2024-07-24 23:17:08.088721] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.376 [2024-07-24 23:17:08.089421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.376 [2024-07-24 23:17:08.089458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.376 [2024-07-24 23:17:08.089469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.376 [2024-07-24 23:17:08.089707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.376 [2024-07-24 23:17:08.089938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.376 [2024-07-24 23:17:08.089948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.376 [2024-07-24 23:17:08.089959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.376 [2024-07-24 23:17:08.093510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.376 [2024-07-24 23:17:08.102520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.376 [2024-07-24 23:17:08.103057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.376 [2024-07-24 23:17:08.103076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.376 [2024-07-24 23:17:08.103083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.376 [2024-07-24 23:17:08.103303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.376 [2024-07-24 23:17:08.103522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.376 [2024-07-24 23:17:08.103529] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.376 [2024-07-24 23:17:08.103536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.376 [2024-07-24 23:17:08.107088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.376 [2024-07-24 23:17:08.116508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.376 [2024-07-24 23:17:08.117170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.376 [2024-07-24 23:17:08.117207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.376 [2024-07-24 23:17:08.117218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.376 [2024-07-24 23:17:08.117457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.376 [2024-07-24 23:17:08.117680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.376 [2024-07-24 23:17:08.117688] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.376 [2024-07-24 23:17:08.117695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.376 [2024-07-24 23:17:08.121297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.376 [2024-07-24 23:17:08.130301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.376 [2024-07-24 23:17:08.131039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.376 [2024-07-24 23:17:08.131076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.376 [2024-07-24 23:17:08.131086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.376 [2024-07-24 23:17:08.131326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.376 [2024-07-24 23:17:08.131548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.376 [2024-07-24 23:17:08.131557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.376 [2024-07-24 23:17:08.131564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.376 [2024-07-24 23:17:08.135124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.376 [2024-07-24 23:17:08.144125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.376 [2024-07-24 23:17:08.144750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.376 [2024-07-24 23:17:08.144778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.376 [2024-07-24 23:17:08.144786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.376 [2024-07-24 23:17:08.145006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.376 [2024-07-24 23:17:08.145225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.376 [2024-07-24 23:17:08.145233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.376 [2024-07-24 23:17:08.145240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.376 [2024-07-24 23:17:08.148788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.376 [2024-07-24 23:17:08.157989] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.376 [2024-07-24 23:17:08.158619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.376 [2024-07-24 23:17:08.158634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.376 [2024-07-24 23:17:08.158642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.376 [2024-07-24 23:17:08.158866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.376 [2024-07-24 23:17:08.159086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.376 [2024-07-24 23:17:08.159094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.376 [2024-07-24 23:17:08.159101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.638 [2024-07-24 23:17:08.162647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.638 [2024-07-24 23:17:08.171853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.638 [2024-07-24 23:17:08.172453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.638 [2024-07-24 23:17:08.172468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.638 [2024-07-24 23:17:08.172475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.638 [2024-07-24 23:17:08.172694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.638 [2024-07-24 23:17:08.172919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.638 [2024-07-24 23:17:08.172927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.638 [2024-07-24 23:17:08.172934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.638 [2024-07-24 23:17:08.176476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.638 [2024-07-24 23:17:08.185722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.638 [2024-07-24 23:17:08.186396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.638 [2024-07-24 23:17:08.186433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.638 [2024-07-24 23:17:08.186443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.638 [2024-07-24 23:17:08.186682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.638 [2024-07-24 23:17:08.186919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.638 [2024-07-24 23:17:08.186928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.638 [2024-07-24 23:17:08.186936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.638 [2024-07-24 23:17:08.190488] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.638 [2024-07-24 23:17:08.199697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.638 [2024-07-24 23:17:08.200446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.638 [2024-07-24 23:17:08.200482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.638 [2024-07-24 23:17:08.200493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.638 [2024-07-24 23:17:08.200732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.638 [2024-07-24 23:17:08.200964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.638 [2024-07-24 23:17:08.200973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.638 [2024-07-24 23:17:08.200980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.638 [2024-07-24 23:17:08.204532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.638 [2024-07-24 23:17:08.213534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.638 [2024-07-24 23:17:08.214151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.638 [2024-07-24 23:17:08.214170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.638 [2024-07-24 23:17:08.214178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.638 [2024-07-24 23:17:08.214397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.638 [2024-07-24 23:17:08.214616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.638 [2024-07-24 23:17:08.214624] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.638 [2024-07-24 23:17:08.214631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.638 [2024-07-24 23:17:08.218190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.638 [2024-07-24 23:17:08.227394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.638 [2024-07-24 23:17:08.228086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.638 [2024-07-24 23:17:08.228123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.638 [2024-07-24 23:17:08.228134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.638 [2024-07-24 23:17:08.228372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.638 [2024-07-24 23:17:08.228595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.638 [2024-07-24 23:17:08.228603] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.638 [2024-07-24 23:17:08.228611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.638 [2024-07-24 23:17:08.232170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.638 [2024-07-24 23:17:08.241386] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.638 [2024-07-24 23:17:08.242083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.638 [2024-07-24 23:17:08.242120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.638 [2024-07-24 23:17:08.242132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.638 [2024-07-24 23:17:08.242372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.638 [2024-07-24 23:17:08.242595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.638 [2024-07-24 23:17:08.242603] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.638 [2024-07-24 23:17:08.242611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.638 [2024-07-24 23:17:08.246172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.638 [2024-07-24 23:17:08.255205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.638 [2024-07-24 23:17:08.255876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.638 [2024-07-24 23:17:08.255914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.638 [2024-07-24 23:17:08.255925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.638 [2024-07-24 23:17:08.256165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.638 [2024-07-24 23:17:08.256388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.638 [2024-07-24 23:17:08.256396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.638 [2024-07-24 23:17:08.256404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.638 [2024-07-24 23:17:08.259962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.638 [2024-07-24 23:17:08.269177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.638 [2024-07-24 23:17:08.269781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.638 [2024-07-24 23:17:08.269800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.638 [2024-07-24 23:17:08.269808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.638 [2024-07-24 23:17:08.270028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.638 [2024-07-24 23:17:08.270247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.638 [2024-07-24 23:17:08.270254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.639 [2024-07-24 23:17:08.270261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.639 [2024-07-24 23:17:08.273822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.639 [2024-07-24 23:17:08.283040] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.639 [2024-07-24 23:17:08.283617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.639 [2024-07-24 23:17:08.283633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.639 [2024-07-24 23:17:08.283645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.639 [2024-07-24 23:17:08.283870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.639 [2024-07-24 23:17:08.284089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.639 [2024-07-24 23:17:08.284097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.639 [2024-07-24 23:17:08.284103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.639 [2024-07-24 23:17:08.287649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.639 [2024-07-24 23:17:08.296860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.639 [2024-07-24 23:17:08.297586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.639 [2024-07-24 23:17:08.297622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.639 [2024-07-24 23:17:08.297633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.639 [2024-07-24 23:17:08.297882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.639 [2024-07-24 23:17:08.298105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.639 [2024-07-24 23:17:08.298113] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.639 [2024-07-24 23:17:08.298121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.639 [2024-07-24 23:17:08.301685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.639 [2024-07-24 23:17:08.310694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.639 [2024-07-24 23:17:08.311338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.639 [2024-07-24 23:17:08.311375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.639 [2024-07-24 23:17:08.311385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.639 [2024-07-24 23:17:08.311624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.639 [2024-07-24 23:17:08.311856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.639 [2024-07-24 23:17:08.311865] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.639 [2024-07-24 23:17:08.311872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.639 [2024-07-24 23:17:08.315425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.639 [2024-07-24 23:17:08.324629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.639 [2024-07-24 23:17:08.325212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.639 [2024-07-24 23:17:08.325247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.639 [2024-07-24 23:17:08.325259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.639 [2024-07-24 23:17:08.325502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.639 [2024-07-24 23:17:08.325725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.639 [2024-07-24 23:17:08.325739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.639 [2024-07-24 23:17:08.325747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.639 [2024-07-24 23:17:08.329310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.639 [2024-07-24 23:17:08.338598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.639 [2024-07-24 23:17:08.339290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.639 [2024-07-24 23:17:08.339328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.639 [2024-07-24 23:17:08.339338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.639 [2024-07-24 23:17:08.339577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.639 [2024-07-24 23:17:08.339809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.639 [2024-07-24 23:17:08.339819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.639 [2024-07-24 23:17:08.339826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.639 [2024-07-24 23:17:08.343376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.639 [2024-07-24 23:17:08.352593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.639 [2024-07-24 23:17:08.353336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.639 [2024-07-24 23:17:08.353373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.639 [2024-07-24 23:17:08.353383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.639 [2024-07-24 23:17:08.353624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.639 [2024-07-24 23:17:08.353857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.639 [2024-07-24 23:17:08.353867] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.639 [2024-07-24 23:17:08.353875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.639 [2024-07-24 23:17:08.357431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.639 [2024-07-24 23:17:08.366447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.639 [2024-07-24 23:17:08.367164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.639 [2024-07-24 23:17:08.367201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.639 [2024-07-24 23:17:08.367211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.639 [2024-07-24 23:17:08.367450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.639 [2024-07-24 23:17:08.367673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.639 [2024-07-24 23:17:08.367681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.639 [2024-07-24 23:17:08.367688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.639 [2024-07-24 23:17:08.371244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.639 [2024-07-24 23:17:08.380269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.639 [2024-07-24 23:17:08.381030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.639 [2024-07-24 23:17:08.381067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.639 [2024-07-24 23:17:08.381077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.639 [2024-07-24 23:17:08.381316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.639 [2024-07-24 23:17:08.381539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.639 [2024-07-24 23:17:08.381547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.639 [2024-07-24 23:17:08.381554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.639 [2024-07-24 23:17:08.385113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.639 [2024-07-24 23:17:08.394124] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.639 [2024-07-24 23:17:08.394857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.639 [2024-07-24 23:17:08.394894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.639 [2024-07-24 23:17:08.394906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.639 [2024-07-24 23:17:08.395147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.639 [2024-07-24 23:17:08.395370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.639 [2024-07-24 23:17:08.395378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.639 [2024-07-24 23:17:08.395386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.639 [2024-07-24 23:17:08.398945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.639 [2024-07-24 23:17:08.407959] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.639 [2024-07-24 23:17:08.408599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.639 [2024-07-24 23:17:08.408617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.639 [2024-07-24 23:17:08.408625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.639 [2024-07-24 23:17:08.408851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.639 [2024-07-24 23:17:08.409072] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.640 [2024-07-24 23:17:08.409079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.640 [2024-07-24 23:17:08.409086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.640 [2024-07-24 23:17:08.412630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.640 [2024-07-24 23:17:08.421853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.640 [2024-07-24 23:17:08.422542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.640 [2024-07-24 23:17:08.422578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.640 [2024-07-24 23:17:08.422588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.640 [2024-07-24 23:17:08.422841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.902 [2024-07-24 23:17:08.423065] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.902 [2024-07-24 23:17:08.423076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.902 [2024-07-24 23:17:08.423085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.902 [2024-07-24 23:17:08.426641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.902 [2024-07-24 23:17:08.435655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.902 [2024-07-24 23:17:08.436274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.902 [2024-07-24 23:17:08.436293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.902 [2024-07-24 23:17:08.436300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.902 [2024-07-24 23:17:08.436520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.902 [2024-07-24 23:17:08.436739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.902 [2024-07-24 23:17:08.436746] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.902 [2024-07-24 23:17:08.436762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.902 [2024-07-24 23:17:08.440411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.902 [2024-07-24 23:17:08.449653] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.902 [2024-07-24 23:17:08.450359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.902 [2024-07-24 23:17:08.450397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.902 [2024-07-24 23:17:08.450409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.902 [2024-07-24 23:17:08.450650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.902 [2024-07-24 23:17:08.450882] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.902 [2024-07-24 23:17:08.450892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.902 [2024-07-24 23:17:08.450899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.902 [2024-07-24 23:17:08.454466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.902 [2024-07-24 23:17:08.463481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.902 [2024-07-24 23:17:08.464118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.902 [2024-07-24 23:17:08.464137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.902 [2024-07-24 23:17:08.464145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.902 [2024-07-24 23:17:08.464365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.902 [2024-07-24 23:17:08.464585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.902 [2024-07-24 23:17:08.464593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.902 [2024-07-24 23:17:08.464604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.902 [2024-07-24 23:17:08.468164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.902 [2024-07-24 23:17:08.477382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.903 [2024-07-24 23:17:08.478068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.903 [2024-07-24 23:17:08.478104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.903 [2024-07-24 23:17:08.478115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.903 [2024-07-24 23:17:08.478354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.903 [2024-07-24 23:17:08.478577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.903 [2024-07-24 23:17:08.478586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.903 [2024-07-24 23:17:08.478593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.903 [2024-07-24 23:17:08.482159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.903 [2024-07-24 23:17:08.491388] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.903 [2024-07-24 23:17:08.492119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.903 [2024-07-24 23:17:08.492155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.903 [2024-07-24 23:17:08.492166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.903 [2024-07-24 23:17:08.492405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.903 [2024-07-24 23:17:08.492628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.903 [2024-07-24 23:17:08.492636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.903 [2024-07-24 23:17:08.492644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.903 [2024-07-24 23:17:08.496204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.903 [2024-07-24 23:17:08.505262] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.903 [2024-07-24 23:17:08.506004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.903 [2024-07-24 23:17:08.506041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.903 [2024-07-24 23:17:08.506051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.903 [2024-07-24 23:17:08.506291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.903 [2024-07-24 23:17:08.506513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.903 [2024-07-24 23:17:08.506521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.903 [2024-07-24 23:17:08.506528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.903 [2024-07-24 23:17:08.510094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.903 [2024-07-24 23:17:08.519119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.903 [2024-07-24 23:17:08.519806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.903 [2024-07-24 23:17:08.519843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:50.903 [2024-07-24 23:17:08.519853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:50.903 [2024-07-24 23:17:08.520092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:50.903 [2024-07-24 23:17:08.520315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.903 [2024-07-24 23:17:08.520324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.903 [2024-07-24 23:17:08.520331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.903 [2024-07-24 23:17:08.523893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.903 [2024-07-24 23:17:08.533107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.903 [2024-07-24 23:17:08.533825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-24 23:17:08.533862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:50.903 [2024-07-24 23:17:08.533874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:50.903 [2024-07-24 23:17:08.534114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:50.903 [2024-07-24 23:17:08.534337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.903 [2024-07-24 23:17:08.534345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.903 [2024-07-24 23:17:08.534353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.903 [2024-07-24 23:17:08.537913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.903 [2024-07-24 23:17:08.546927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.903 [2024-07-24 23:17:08.547663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-24 23:17:08.547699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:50.903 [2024-07-24 23:17:08.547711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:50.903 [2024-07-24 23:17:08.547962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:50.903 [2024-07-24 23:17:08.548186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.903 [2024-07-24 23:17:08.548194] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.903 [2024-07-24 23:17:08.548201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.903 [2024-07-24 23:17:08.551761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.903 [2024-07-24 23:17:08.560781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.903 [2024-07-24 23:17:08.561478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-24 23:17:08.561515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:50.903 [2024-07-24 23:17:08.561526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:50.903 [2024-07-24 23:17:08.561779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:50.903 [2024-07-24 23:17:08.562003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.903 [2024-07-24 23:17:08.562012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.903 [2024-07-24 23:17:08.562019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.903 [2024-07-24 23:17:08.565577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.903 [2024-07-24 23:17:08.574635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.903 [2024-07-24 23:17:08.575249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-24 23:17:08.575269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:50.903 [2024-07-24 23:17:08.575277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:50.903 [2024-07-24 23:17:08.575497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:50.903 [2024-07-24 23:17:08.575717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.903 [2024-07-24 23:17:08.575724] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.903 [2024-07-24 23:17:08.575731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.903 [2024-07-24 23:17:08.579296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.903 [2024-07-24 23:17:08.588520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.903 [2024-07-24 23:17:08.589092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-24 23:17:08.589109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:50.903 [2024-07-24 23:17:08.589116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:50.903 [2024-07-24 23:17:08.589335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:50.903 [2024-07-24 23:17:08.589555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.903 [2024-07-24 23:17:08.589562] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.903 [2024-07-24 23:17:08.589569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.903 [2024-07-24 23:17:08.593127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.903 [2024-07-24 23:17:08.602360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.903 [2024-07-24 23:17:08.603081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-24 23:17:08.603118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:50.903 [2024-07-24 23:17:08.603128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:50.903 [2024-07-24 23:17:08.603367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:50.903 [2024-07-24 23:17:08.603590] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.903 [2024-07-24 23:17:08.603598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.903 [2024-07-24 23:17:08.603610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.903 [2024-07-24 23:17:08.607170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.903 [2024-07-24 23:17:08.616173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.903 [2024-07-24 23:17:08.616822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-24 23:17:08.616842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:50.904 [2024-07-24 23:17:08.616849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:50.904 [2024-07-24 23:17:08.617069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:50.904 [2024-07-24 23:17:08.617289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.904 [2024-07-24 23:17:08.617296] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.904 [2024-07-24 23:17:08.617303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.904 [2024-07-24 23:17:08.620862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.904 [2024-07-24 23:17:08.630084] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.904 [2024-07-24 23:17:08.630808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-24 23:17:08.630845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:50.904 [2024-07-24 23:17:08.630856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:50.904 [2024-07-24 23:17:08.631095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:50.904 [2024-07-24 23:17:08.631317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.904 [2024-07-24 23:17:08.631325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.904 [2024-07-24 23:17:08.631333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.904 [2024-07-24 23:17:08.634894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.904 [2024-07-24 23:17:08.643899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.904 [2024-07-24 23:17:08.644539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-24 23:17:08.644559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:50.904 [2024-07-24 23:17:08.644566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:50.904 [2024-07-24 23:17:08.644791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:50.904 [2024-07-24 23:17:08.645011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.904 [2024-07-24 23:17:08.645018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.904 [2024-07-24 23:17:08.645025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.904 [2024-07-24 23:17:08.648575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.904 [2024-07-24 23:17:08.657787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.904 [2024-07-24 23:17:08.658358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-24 23:17:08.658397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:50.904 [2024-07-24 23:17:08.658407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:50.904 [2024-07-24 23:17:08.658646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:50.904 [2024-07-24 23:17:08.658879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.904 [2024-07-24 23:17:08.658888] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.904 [2024-07-24 23:17:08.658896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.904 [2024-07-24 23:17:08.662454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.904 [2024-07-24 23:17:08.671679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.904 [2024-07-24 23:17:08.672303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-24 23:17:08.672320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:50.904 [2024-07-24 23:17:08.672328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:50.904 [2024-07-24 23:17:08.672547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:50.904 [2024-07-24 23:17:08.672772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.904 [2024-07-24 23:17:08.672780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.904 [2024-07-24 23:17:08.672787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.904 [2024-07-24 23:17:08.676339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.904 [2024-07-24 23:17:08.685561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.904 [2024-07-24 23:17:08.686163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-24 23:17:08.686179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:50.904 [2024-07-24 23:17:08.686186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:50.904 [2024-07-24 23:17:08.686405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:50.904 [2024-07-24 23:17:08.686624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.904 [2024-07-24 23:17:08.686631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.904 [2024-07-24 23:17:08.686638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.169 [2024-07-24 23:17:08.690195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.169 [2024-07-24 23:17:08.699417] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.169 [2024-07-24 23:17:08.700038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.169 [2024-07-24 23:17:08.700054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.169 [2024-07-24 23:17:08.700061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.169 [2024-07-24 23:17:08.700280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.169 [2024-07-24 23:17:08.700506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.169 [2024-07-24 23:17:08.700515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.169 [2024-07-24 23:17:08.700521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.169 [2024-07-24 23:17:08.704085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.169 [2024-07-24 23:17:08.713302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.169 [2024-07-24 23:17:08.713941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.169 [2024-07-24 23:17:08.713956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.169 [2024-07-24 23:17:08.713964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.169 [2024-07-24 23:17:08.714182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.169 [2024-07-24 23:17:08.714401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.169 [2024-07-24 23:17:08.714408] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.169 [2024-07-24 23:17:08.714415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.169 [2024-07-24 23:17:08.717969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.169 [2024-07-24 23:17:08.727195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.169 [2024-07-24 23:17:08.727819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.169 [2024-07-24 23:17:08.727834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.169 [2024-07-24 23:17:08.727841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.169 [2024-07-24 23:17:08.728060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.169 [2024-07-24 23:17:08.728278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.169 [2024-07-24 23:17:08.728286] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.169 [2024-07-24 23:17:08.728293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.169 [2024-07-24 23:17:08.731846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.169 [2024-07-24 23:17:08.741062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.169 [2024-07-24 23:17:08.741698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.169 [2024-07-24 23:17:08.741713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.169 [2024-07-24 23:17:08.741720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.169 [2024-07-24 23:17:08.741945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.169 [2024-07-24 23:17:08.742164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.169 [2024-07-24 23:17:08.742171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.169 [2024-07-24 23:17:08.742178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.169 [2024-07-24 23:17:08.745728] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.170 [2024-07-24 23:17:08.754952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.170 [2024-07-24 23:17:08.755666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.170 [2024-07-24 23:17:08.755703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.170 [2024-07-24 23:17:08.755715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.170 [2024-07-24 23:17:08.755964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.170 [2024-07-24 23:17:08.756187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.170 [2024-07-24 23:17:08.756196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.170 [2024-07-24 23:17:08.756203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.170 [2024-07-24 23:17:08.759764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.170 [2024-07-24 23:17:08.768781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.170 [2024-07-24 23:17:08.769428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.170 [2024-07-24 23:17:08.769447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.170 [2024-07-24 23:17:08.769455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.170 [2024-07-24 23:17:08.769674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.170 [2024-07-24 23:17:08.769901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.170 [2024-07-24 23:17:08.769909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.170 [2024-07-24 23:17:08.769916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.170 [2024-07-24 23:17:08.773470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.170 [2024-07-24 23:17:08.782688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.170 [2024-07-24 23:17:08.783167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.170 [2024-07-24 23:17:08.783183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.170 [2024-07-24 23:17:08.783191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.170 [2024-07-24 23:17:08.783409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.170 [2024-07-24 23:17:08.783628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.170 [2024-07-24 23:17:08.783636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.170 [2024-07-24 23:17:08.783642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.170 [2024-07-24 23:17:08.787197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.170 [2024-07-24 23:17:08.796627] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.170 [2024-07-24 23:17:08.797251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.170 [2024-07-24 23:17:08.797266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.170 [2024-07-24 23:17:08.797278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.170 [2024-07-24 23:17:08.797497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.170 [2024-07-24 23:17:08.797716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.170 [2024-07-24 23:17:08.797724] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.170 [2024-07-24 23:17:08.797731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.170 [2024-07-24 23:17:08.801298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.170 [2024-07-24 23:17:08.810517] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.170 [2024-07-24 23:17:08.811038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.170 [2024-07-24 23:17:08.811053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.170 [2024-07-24 23:17:08.811060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.170 [2024-07-24 23:17:08.811279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.170 [2024-07-24 23:17:08.811497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.170 [2024-07-24 23:17:08.811505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.170 [2024-07-24 23:17:08.811512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.170 [2024-07-24 23:17:08.815065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.170 [2024-07-24 23:17:08.824497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.170 [2024-07-24 23:17:08.825105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.170 [2024-07-24 23:17:08.825122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.170 [2024-07-24 23:17:08.825129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.170 [2024-07-24 23:17:08.825347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.170 [2024-07-24 23:17:08.825566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.170 [2024-07-24 23:17:08.825573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.170 [2024-07-24 23:17:08.825580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.170 [2024-07-24 23:17:08.829133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.170 [2024-07-24 23:17:08.838353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.170 [2024-07-24 23:17:08.838911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.170 [2024-07-24 23:17:08.838927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.170 [2024-07-24 23:17:08.838934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.170 [2024-07-24 23:17:08.839153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.170 [2024-07-24 23:17:08.839372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.170 [2024-07-24 23:17:08.839383] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.170 [2024-07-24 23:17:08.839390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.170 [2024-07-24 23:17:08.842945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.170 [2024-07-24 23:17:08.852163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.170 [2024-07-24 23:17:08.852786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.170 [2024-07-24 23:17:08.852801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.170 [2024-07-24 23:17:08.852808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.170 [2024-07-24 23:17:08.853027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.170 [2024-07-24 23:17:08.853245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.170 [2024-07-24 23:17:08.853252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.170 [2024-07-24 23:17:08.853259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.170 [2024-07-24 23:17:08.856824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.170 [2024-07-24 23:17:08.866046] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.170 [2024-07-24 23:17:08.866632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.170 [2024-07-24 23:17:08.866649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.170 [2024-07-24 23:17:08.866656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.170 [2024-07-24 23:17:08.866881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.170 [2024-07-24 23:17:08.867100] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.170 [2024-07-24 23:17:08.867108] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.170 [2024-07-24 23:17:08.867114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.170 [2024-07-24 23:17:08.870663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.170 [2024-07-24 23:17:08.879886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.170 [2024-07-24 23:17:08.880543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.170 [2024-07-24 23:17:08.880579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.170 [2024-07-24 23:17:08.880590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.170 [2024-07-24 23:17:08.880838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.170 [2024-07-24 23:17:08.881062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.170 [2024-07-24 23:17:08.881071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.170 [2024-07-24 23:17:08.881078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.170 [2024-07-24 23:17:08.884633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.171 [2024-07-24 23:17:08.893869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.171 [2024-07-24 23:17:08.894510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.171 [2024-07-24 23:17:08.894529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.171 [2024-07-24 23:17:08.894537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.171 [2024-07-24 23:17:08.894764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.171 [2024-07-24 23:17:08.894984] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.171 [2024-07-24 23:17:08.894992] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.171 [2024-07-24 23:17:08.894998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.171 [2024-07-24 23:17:08.898546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.171 [2024-07-24 23:17:08.907778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.171 [2024-07-24 23:17:08.908406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.171 [2024-07-24 23:17:08.908422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.171 [2024-07-24 23:17:08.908429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.171 [2024-07-24 23:17:08.908648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.171 [2024-07-24 23:17:08.908872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.171 [2024-07-24 23:17:08.908881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.171 [2024-07-24 23:17:08.908888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.171 [2024-07-24 23:17:08.912440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.171 [2024-07-24 23:17:08.921666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.171 [2024-07-24 23:17:08.922099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.171 [2024-07-24 23:17:08.922120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.171 [2024-07-24 23:17:08.922128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.171 [2024-07-24 23:17:08.922348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.171 [2024-07-24 23:17:08.922567] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.171 [2024-07-24 23:17:08.922575] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.171 [2024-07-24 23:17:08.922582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.171 [2024-07-24 23:17:08.926140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.171 [2024-07-24 23:17:08.935565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.171 [2024-07-24 23:17:08.936191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.171 [2024-07-24 23:17:08.936207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.171 [2024-07-24 23:17:08.936214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.171 [2024-07-24 23:17:08.936440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.171 [2024-07-24 23:17:08.936659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.171 [2024-07-24 23:17:08.936666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.171 [2024-07-24 23:17:08.936673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.171 [2024-07-24 23:17:08.940229] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.171 [2024-07-24 23:17:08.949454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.171 [2024-07-24 23:17:08.950143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.171 [2024-07-24 23:17:08.950181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.171 [2024-07-24 23:17:08.950191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.171 [2024-07-24 23:17:08.950430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.171 [2024-07-24 23:17:08.950653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.171 [2024-07-24 23:17:08.950661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.171 [2024-07-24 23:17:08.950668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.464 [2024-07-24 23:17:08.954228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.464 [2024-07-24 23:17:08.963441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.464 [2024-07-24 23:17:08.964125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.464 [2024-07-24 23:17:08.964161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.464 [2024-07-24 23:17:08.964173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.464 [2024-07-24 23:17:08.964415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.464 [2024-07-24 23:17:08.964638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.464 [2024-07-24 23:17:08.964646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.464 [2024-07-24 23:17:08.964654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.464 [2024-07-24 23:17:08.968222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.464 [2024-07-24 23:17:08.977242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.464 [2024-07-24 23:17:08.977784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.464 [2024-07-24 23:17:08.977804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.464 [2024-07-24 23:17:08.977812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.464 [2024-07-24 23:17:08.978032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.464 [2024-07-24 23:17:08.978251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.464 [2024-07-24 23:17:08.978259] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.464 [2024-07-24 23:17:08.978271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.464 [2024-07-24 23:17:08.981829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.464 [2024-07-24 23:17:08.991051] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.464 [2024-07-24 23:17:08.991806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.464 [2024-07-24 23:17:08.991843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.464 [2024-07-24 23:17:08.991855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.464 [2024-07-24 23:17:08.992097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.464 [2024-07-24 23:17:08.992320] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.464 [2024-07-24 23:17:08.992328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.464 [2024-07-24 23:17:08.992335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.464 [2024-07-24 23:17:08.995896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.464 [2024-07-24 23:17:09.004916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.464 [2024-07-24 23:17:09.005562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.464 [2024-07-24 23:17:09.005582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.464 [2024-07-24 23:17:09.005589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.464 [2024-07-24 23:17:09.005815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.464 [2024-07-24 23:17:09.006034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.464 [2024-07-24 23:17:09.006043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.464 [2024-07-24 23:17:09.006050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.464 [2024-07-24 23:17:09.009597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.464 [2024-07-24 23:17:09.018807] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.464 [2024-07-24 23:17:09.019406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.464 [2024-07-24 23:17:09.019423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.464 [2024-07-24 23:17:09.019430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.464 [2024-07-24 23:17:09.019649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.464 [2024-07-24 23:17:09.019876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.464 [2024-07-24 23:17:09.019885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.464 [2024-07-24 23:17:09.019892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.464 [2024-07-24 23:17:09.023440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.464 [2024-07-24 23:17:09.032660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.464 [2024-07-24 23:17:09.033228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.464 [2024-07-24 23:17:09.033265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.464 [2024-07-24 23:17:09.033276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.464 [2024-07-24 23:17:09.033515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.464 [2024-07-24 23:17:09.033737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.464 [2024-07-24 23:17:09.033746] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.464 [2024-07-24 23:17:09.033764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.464 [2024-07-24 23:17:09.037321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.464 [2024-07-24 23:17:09.046547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.464 [2024-07-24 23:17:09.047170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.464 [2024-07-24 23:17:09.047188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.464 [2024-07-24 23:17:09.047196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.464 [2024-07-24 23:17:09.047416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.464 [2024-07-24 23:17:09.047635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.464 [2024-07-24 23:17:09.047642] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.464 [2024-07-24 23:17:09.047649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.464 [2024-07-24 23:17:09.051210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.464 [2024-07-24 23:17:09.060428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.464 [2024-07-24 23:17:09.061111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.464 [2024-07-24 23:17:09.061149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.464 [2024-07-24 23:17:09.061159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.464 [2024-07-24 23:17:09.061398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.464 [2024-07-24 23:17:09.061621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.464 [2024-07-24 23:17:09.061630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.464 [2024-07-24 23:17:09.061638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.464 [2024-07-24 23:17:09.065203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.464 [2024-07-24 23:17:09.074429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.464 [2024-07-24 23:17:09.074948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.464 [2024-07-24 23:17:09.074967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.464 [2024-07-24 23:17:09.074975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.464 [2024-07-24 23:17:09.075195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.465 [2024-07-24 23:17:09.075419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.465 [2024-07-24 23:17:09.075427] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.465 [2024-07-24 23:17:09.075434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.465 [2024-07-24 23:17:09.078992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.465 [2024-07-24 23:17:09.088421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.465 [2024-07-24 23:17:09.089102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.465 [2024-07-24 23:17:09.089139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.465 [2024-07-24 23:17:09.089150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.465 [2024-07-24 23:17:09.089388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.465 [2024-07-24 23:17:09.089611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.465 [2024-07-24 23:17:09.089620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.465 [2024-07-24 23:17:09.089627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.465 [2024-07-24 23:17:09.093191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.465 [2024-07-24 23:17:09.102422] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.465 [2024-07-24 23:17:09.103019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.465 [2024-07-24 23:17:09.103038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.465 [2024-07-24 23:17:09.103046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.465 [2024-07-24 23:17:09.103265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.465 [2024-07-24 23:17:09.103485] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.465 [2024-07-24 23:17:09.103492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.465 [2024-07-24 23:17:09.103499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.465 [2024-07-24 23:17:09.107054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.465 [2024-07-24 23:17:09.116263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.465 [2024-07-24 23:17:09.116943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.465 [2024-07-24 23:17:09.116980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.465 [2024-07-24 23:17:09.116990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.465 [2024-07-24 23:17:09.117229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.465 [2024-07-24 23:17:09.117452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.465 [2024-07-24 23:17:09.117460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.465 [2024-07-24 23:17:09.117467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.465 [2024-07-24 23:17:09.121040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.465 [2024-07-24 23:17:09.130259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.465 [2024-07-24 23:17:09.130995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.465 [2024-07-24 23:17:09.131032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.465 [2024-07-24 23:17:09.131042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.465 [2024-07-24 23:17:09.131281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.465 [2024-07-24 23:17:09.131504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.465 [2024-07-24 23:17:09.131512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.465 [2024-07-24 23:17:09.131519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.465 [2024-07-24 23:17:09.135079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.465 [2024-07-24 23:17:09.144079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.465 [2024-07-24 23:17:09.144797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.465 [2024-07-24 23:17:09.144833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.465 [2024-07-24 23:17:09.144845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.465 [2024-07-24 23:17:09.145087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.465 [2024-07-24 23:17:09.145310] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.465 [2024-07-24 23:17:09.145318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.465 [2024-07-24 23:17:09.145325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.465 [2024-07-24 23:17:09.148884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.465 [2024-07-24 23:17:09.157882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.465 [2024-07-24 23:17:09.158571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.465 [2024-07-24 23:17:09.158607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.465 [2024-07-24 23:17:09.158618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.465 [2024-07-24 23:17:09.158866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.465 [2024-07-24 23:17:09.159090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.465 [2024-07-24 23:17:09.159098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.465 [2024-07-24 23:17:09.159105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.465 [2024-07-24 23:17:09.162656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.465 [2024-07-24 23:17:09.171868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.465 [2024-07-24 23:17:09.172506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.465 [2024-07-24 23:17:09.172529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.465 [2024-07-24 23:17:09.172537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.465 [2024-07-24 23:17:09.172763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.465 [2024-07-24 23:17:09.172984] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.465 [2024-07-24 23:17:09.172991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.465 [2024-07-24 23:17:09.172998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.465 [2024-07-24 23:17:09.176543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.465 [2024-07-24 23:17:09.185746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.465 [2024-07-24 23:17:09.186478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.465 [2024-07-24 23:17:09.186515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.465 [2024-07-24 23:17:09.186525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.465 [2024-07-24 23:17:09.186775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.465 [2024-07-24 23:17:09.186998] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.465 [2024-07-24 23:17:09.187007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.465 [2024-07-24 23:17:09.187014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.465 [2024-07-24 23:17:09.190565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.465 [2024-07-24 23:17:09.199561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.465 [2024-07-24 23:17:09.200279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.465 [2024-07-24 23:17:09.200316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.465 [2024-07-24 23:17:09.200326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.465 [2024-07-24 23:17:09.200565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.465 [2024-07-24 23:17:09.200805] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.465 [2024-07-24 23:17:09.200815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.465 [2024-07-24 23:17:09.200822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.465 [2024-07-24 23:17:09.204374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.465 [2024-07-24 23:17:09.213370] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.465 [2024-07-24 23:17:09.214019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.465 [2024-07-24 23:17:09.214038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.465 [2024-07-24 23:17:09.214045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.465 [2024-07-24 23:17:09.214266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.465 [2024-07-24 23:17:09.214489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.466 [2024-07-24 23:17:09.214497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.466 [2024-07-24 23:17:09.214504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.466 [2024-07-24 23:17:09.218054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.466 [2024-07-24 23:17:09.227278] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.466 [2024-07-24 23:17:09.227876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.466 [2024-07-24 23:17:09.227893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.466 [2024-07-24 23:17:09.227900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.466 [2024-07-24 23:17:09.228119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.466 [2024-07-24 23:17:09.228338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.466 [2024-07-24 23:17:09.228345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.466 [2024-07-24 23:17:09.228352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.466 [2024-07-24 23:17:09.231910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.730 [2024-07-24 23:17:09.241130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.730 [2024-07-24 23:17:09.241846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.730 [2024-07-24 23:17:09.241883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.730 [2024-07-24 23:17:09.241893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.730 [2024-07-24 23:17:09.242132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.730 [2024-07-24 23:17:09.242355] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.730 [2024-07-24 23:17:09.242363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.730 [2024-07-24 23:17:09.242371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.730 [2024-07-24 23:17:09.245931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.730 [2024-07-24 23:17:09.254937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.730 [2024-07-24 23:17:09.255652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.730 [2024-07-24 23:17:09.255690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.730 [2024-07-24 23:17:09.255700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.730 [2024-07-24 23:17:09.255948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.730 [2024-07-24 23:17:09.256172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.730 [2024-07-24 23:17:09.256181] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.730 [2024-07-24 23:17:09.256188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1038662 Killed "${NVMF_APP[@]}" "$@" 00:28:51.730 23:17:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:51.730 23:17:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:51.730 23:17:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:51.730 23:17:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:51.730 23:17:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.730 [2024-07-24 23:17:09.259744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.730 23:17:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1040371 00:28:51.730 23:17:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1040371 00:28:51.730 23:17:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:51.730 23:17:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1040371 ']' 00:28:51.730 23:17:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:51.730 23:17:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:51.730 23:17:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:51.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:51.730 23:17:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:51.730 23:17:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.730 [2024-07-24 23:17:09.268766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.730 [2024-07-24 23:17:09.269418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.730 [2024-07-24 23:17:09.269436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.730 [2024-07-24 23:17:09.269444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.730 [2024-07-24 23:17:09.269664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.730 [2024-07-24 23:17:09.269890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.730 [2024-07-24 23:17:09.269900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.730 [2024-07-24 23:17:09.269907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.730 [2024-07-24 23:17:09.273462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.730 [2024-07-24 23:17:09.282686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.730 [2024-07-24 23:17:09.283265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.730 [2024-07-24 23:17:09.283281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.730 [2024-07-24 23:17:09.283288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.730 [2024-07-24 23:17:09.283507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.730 [2024-07-24 23:17:09.283726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.730 [2024-07-24 23:17:09.283733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.730 [2024-07-24 23:17:09.283740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.730 [2024-07-24 23:17:09.287306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.730 [2024-07-24 23:17:09.296532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.730 [2024-07-24 23:17:09.297168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.730 [2024-07-24 23:17:09.297184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.730 [2024-07-24 23:17:09.297191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.730 [2024-07-24 23:17:09.297410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.730 [2024-07-24 23:17:09.297629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.730 [2024-07-24 23:17:09.297636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.730 [2024-07-24 23:17:09.297642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.730 [2024-07-24 23:17:09.301210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.730 [2024-07-24 23:17:09.310433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.730 [2024-07-24 23:17:09.311091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.730 [2024-07-24 23:17:09.311107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.730 [2024-07-24 23:17:09.311115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.730 [2024-07-24 23:17:09.311334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.730 [2024-07-24 23:17:09.311553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.730 [2024-07-24 23:17:09.311560] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.730 [2024-07-24 23:17:09.311567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.730 [2024-07-24 23:17:09.315123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.730 [2024-07-24 23:17:09.324333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.730 [2024-07-24 23:17:09.324969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.730 [2024-07-24 23:17:09.324986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.730 [2024-07-24 23:17:09.324994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.730 [2024-07-24 23:17:09.325141] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:28:51.730 [2024-07-24 23:17:09.325198] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:51.730 [2024-07-24 23:17:09.325214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.730 [2024-07-24 23:17:09.325433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.730 [2024-07-24 23:17:09.325441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.730 [2024-07-24 23:17:09.325448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.730 [2024-07-24 23:17:09.329003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.730 [2024-07-24 23:17:09.338228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.730 [2024-07-24 23:17:09.338856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.730 [2024-07-24 23:17:09.338873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.730 [2024-07-24 23:17:09.338880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.730 [2024-07-24 23:17:09.339099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.730 [2024-07-24 23:17:09.339318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.731 [2024-07-24 23:17:09.339326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.731 [2024-07-24 23:17:09.339333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.731 [2024-07-24 23:17:09.342886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.731 [2024-07-24 23:17:09.352107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.731 [2024-07-24 23:17:09.352834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.731 [2024-07-24 23:17:09.352871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.731 [2024-07-24 23:17:09.352883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.731 [2024-07-24 23:17:09.353126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.731 [2024-07-24 23:17:09.353349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.731 [2024-07-24 23:17:09.353358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.731 [2024-07-24 23:17:09.353365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.731 [2024-07-24 23:17:09.356925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.731 EAL: No free 2048 kB hugepages reported on node 1 00:28:51.731 [2024-07-24 23:17:09.366004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.731 [2024-07-24 23:17:09.366742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.731 [2024-07-24 23:17:09.366786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.731 [2024-07-24 23:17:09.366797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.731 [2024-07-24 23:17:09.367037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.731 [2024-07-24 23:17:09.367259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.731 [2024-07-24 23:17:09.367268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.731 [2024-07-24 23:17:09.367275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.731 [2024-07-24 23:17:09.370830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.731 [2024-07-24 23:17:09.379834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.731 [2024-07-24 23:17:09.380571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.731 [2024-07-24 23:17:09.380608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.731 [2024-07-24 23:17:09.380623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.731 [2024-07-24 23:17:09.380872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.731 [2024-07-24 23:17:09.381096] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.731 [2024-07-24 23:17:09.381106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.731 [2024-07-24 23:17:09.381113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.731 [2024-07-24 23:17:09.384663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.731 [2024-07-24 23:17:09.393666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.731 [2024-07-24 23:17:09.394395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.731 [2024-07-24 23:17:09.394432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.731 [2024-07-24 23:17:09.394442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.731 [2024-07-24 23:17:09.394681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.731 [2024-07-24 23:17:09.394912] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.731 [2024-07-24 23:17:09.394921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.731 [2024-07-24 23:17:09.394930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.731 [2024-07-24 23:17:09.398480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.731 [2024-07-24 23:17:09.407497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.731 [2024-07-24 23:17:09.408237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.731 [2024-07-24 23:17:09.408273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.731 [2024-07-24 23:17:09.408285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.731 [2024-07-24 23:17:09.408526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.731 [2024-07-24 23:17:09.408749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.731 [2024-07-24 23:17:09.408766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.731 [2024-07-24 23:17:09.408773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.731 [2024-07-24 23:17:09.412323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.731 [2024-07-24 23:17:09.414535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:51.731 [2024-07-24 23:17:09.421335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.731 [2024-07-24 23:17:09.422030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.731 [2024-07-24 23:17:09.422067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.731 [2024-07-24 23:17:09.422078] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.731 [2024-07-24 23:17:09.422317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.731 [2024-07-24 23:17:09.422547] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.731 [2024-07-24 23:17:09.422556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.731 [2024-07-24 23:17:09.422563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.731 [2024-07-24 23:17:09.426118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.731 [2024-07-24 23:17:09.435329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.731 [2024-07-24 23:17:09.435960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.731 [2024-07-24 23:17:09.435979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.731 [2024-07-24 23:17:09.435987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.731 [2024-07-24 23:17:09.436207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.731 [2024-07-24 23:17:09.436426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.731 [2024-07-24 23:17:09.436433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.731 [2024-07-24 23:17:09.436440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.731 [2024-07-24 23:17:09.439991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.731 [2024-07-24 23:17:09.449199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.731 [2024-07-24 23:17:09.449900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.731 [2024-07-24 23:17:09.449938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.731 [2024-07-24 23:17:09.449949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.731 [2024-07-24 23:17:09.450189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.731 [2024-07-24 23:17:09.450412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.731 [2024-07-24 23:17:09.450420] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.731 [2024-07-24 23:17:09.450428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.731 [2024-07-24 23:17:09.453990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.731 [2024-07-24 23:17:09.462996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.731 [2024-07-24 23:17:09.463659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.731 [2024-07-24 23:17:09.463676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.731 [2024-07-24 23:17:09.463684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.731 [2024-07-24 23:17:09.464069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.731 [2024-07-24 23:17:09.464294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.731 [2024-07-24 23:17:09.464301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.731 [2024-07-24 23:17:09.464309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.731 [2024-07-24 23:17:09.467867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.731 [2024-07-24 23:17:09.468035] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:51.731 [2024-07-24 23:17:09.468057] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:51.731 [2024-07-24 23:17:09.468063] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:51.731 [2024-07-24 23:17:09.468068] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:51.731 [2024-07-24 23:17:09.468073] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:51.731 [2024-07-24 23:17:09.468174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:51.732 [2024-07-24 23:17:09.468310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:51.732 [2024-07-24 23:17:09.468311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:51.732 [2024-07-24 23:17:09.476866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.732 [2024-07-24 23:17:09.477593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.732 [2024-07-24 23:17:09.477631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.732 [2024-07-24 23:17:09.477642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.732 [2024-07-24 23:17:09.477893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.732 [2024-07-24 23:17:09.478117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.732 [2024-07-24 23:17:09.478125] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.732 [2024-07-24 23:17:09.478133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.732 [2024-07-24 23:17:09.481684] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.732 [2024-07-24 23:17:09.490687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.732 [2024-07-24 23:17:09.491418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.732 [2024-07-24 23:17:09.491457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:51.732 [2024-07-24 23:17:09.491467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:51.732 [2024-07-24 23:17:09.491708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:51.732 [2024-07-24 23:17:09.491938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.732 [2024-07-24 23:17:09.491947] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.732 [2024-07-24 23:17:09.491955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.732 [2024-07-24 23:17:09.495508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.732 [2024-07-24 23:17:09.504531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.732 [2024-07-24 23:17:09.505248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.732 [2024-07-24 23:17:09.505287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.732 [2024-07-24 23:17:09.505297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.732 [2024-07-24 23:17:09.505538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.732 [2024-07-24 23:17:09.505773] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.732 [2024-07-24 23:17:09.505783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.732 [2024-07-24 23:17:09.505790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.732 [2024-07-24 23:17:09.509342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.994 [2024-07-24 23:17:09.518342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.994 [2024-07-24 23:17:09.519078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.994 [2024-07-24 23:17:09.519115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.994 [2024-07-24 23:17:09.519127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.994 [2024-07-24 23:17:09.519369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.994 [2024-07-24 23:17:09.519592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.994 [2024-07-24 23:17:09.519600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.994 [2024-07-24 23:17:09.519608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.994 [2024-07-24 23:17:09.523180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.994 [2024-07-24 23:17:09.532186] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.994 [2024-07-24 23:17:09.532860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.994 [2024-07-24 23:17:09.532897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.994 [2024-07-24 23:17:09.532909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.994 [2024-07-24 23:17:09.533151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.994 [2024-07-24 23:17:09.533374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.994 [2024-07-24 23:17:09.533383] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.994 [2024-07-24 23:17:09.533391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.994 [2024-07-24 23:17:09.536950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.994 [2024-07-24 23:17:09.546165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.994 [2024-07-24 23:17:09.546799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.994 [2024-07-24 23:17:09.546835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.994 [2024-07-24 23:17:09.546847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.994 [2024-07-24 23:17:09.547090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.994 [2024-07-24 23:17:09.547313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.994 [2024-07-24 23:17:09.547321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.994 [2024-07-24 23:17:09.547328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.994 [2024-07-24 23:17:09.550893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.994 [2024-07-24 23:17:09.560103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.994 [2024-07-24 23:17:09.560847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.994 [2024-07-24 23:17:09.560884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.994 [2024-07-24 23:17:09.560897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.994 [2024-07-24 23:17:09.561139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.994 [2024-07-24 23:17:09.561361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.994 [2024-07-24 23:17:09.561370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.994 [2024-07-24 23:17:09.561377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.994 [2024-07-24 23:17:09.564938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.994 [2024-07-24 23:17:09.573941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.994 [2024-07-24 23:17:09.574664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.994 [2024-07-24 23:17:09.574700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.994 [2024-07-24 23:17:09.574712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.994 [2024-07-24 23:17:09.574962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.994 [2024-07-24 23:17:09.575186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.994 [2024-07-24 23:17:09.575194] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.994 [2024-07-24 23:17:09.575202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.994 [2024-07-24 23:17:09.578753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.994 [2024-07-24 23:17:09.587750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.994 [2024-07-24 23:17:09.588499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.994 [2024-07-24 23:17:09.588536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.994 [2024-07-24 23:17:09.588547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.994 [2024-07-24 23:17:09.588793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.994 [2024-07-24 23:17:09.589016] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.994 [2024-07-24 23:17:09.589024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.994 [2024-07-24 23:17:09.589032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.994 [2024-07-24 23:17:09.592581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.994 [2024-07-24 23:17:09.601581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.994 [2024-07-24 23:17:09.602289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.994 [2024-07-24 23:17:09.602326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.994 [2024-07-24 23:17:09.602341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.994 [2024-07-24 23:17:09.602580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.994 [2024-07-24 23:17:09.602812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.994 [2024-07-24 23:17:09.602822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.994 [2024-07-24 23:17:09.602829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.994 [2024-07-24 23:17:09.606379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.994 [2024-07-24 23:17:09.615380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.994 [2024-07-24 23:17:09.615853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.994 [2024-07-24 23:17:09.615890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.995 [2024-07-24 23:17:09.615902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.995 [2024-07-24 23:17:09.616144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.995 [2024-07-24 23:17:09.616366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.995 [2024-07-24 23:17:09.616375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.995 [2024-07-24 23:17:09.616382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.995 [2024-07-24 23:17:09.619944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.995 [2024-07-24 23:17:09.629376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.995 [2024-07-24 23:17:09.630108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.995 [2024-07-24 23:17:09.630145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.995 [2024-07-24 23:17:09.630157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.995 [2024-07-24 23:17:09.630396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.995 [2024-07-24 23:17:09.630618] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.995 [2024-07-24 23:17:09.630627] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.995 [2024-07-24 23:17:09.630634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.995 [2024-07-24 23:17:09.634193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.995 [2024-07-24 23:17:09.643197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.995 [2024-07-24 23:17:09.643860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.995 [2024-07-24 23:17:09.643897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.995 [2024-07-24 23:17:09.643909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.995 [2024-07-24 23:17:09.644152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.995 [2024-07-24 23:17:09.644375] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.995 [2024-07-24 23:17:09.644389] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.995 [2024-07-24 23:17:09.644397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.995 [2024-07-24 23:17:09.647959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.995 [2024-07-24 23:17:09.657173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.995 [2024-07-24 23:17:09.657830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.995 [2024-07-24 23:17:09.657867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.995 [2024-07-24 23:17:09.657879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.995 [2024-07-24 23:17:09.658119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.995 [2024-07-24 23:17:09.658342] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.995 [2024-07-24 23:17:09.658351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.995 [2024-07-24 23:17:09.658358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.995 [2024-07-24 23:17:09.661917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.995 [2024-07-24 23:17:09.671129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.995 [2024-07-24 23:17:09.671738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.995 [2024-07-24 23:17:09.671762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.995 [2024-07-24 23:17:09.671770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.995 [2024-07-24 23:17:09.671990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.995 [2024-07-24 23:17:09.672208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.995 [2024-07-24 23:17:09.672216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.995 [2024-07-24 23:17:09.672223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.995 [2024-07-24 23:17:09.675770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.995 [2024-07-24 23:17:09.684975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.995 [2024-07-24 23:17:09.685460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.995 [2024-07-24 23:17:09.685476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.995 [2024-07-24 23:17:09.685483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.995 [2024-07-24 23:17:09.685702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.995 [2024-07-24 23:17:09.685925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.995 [2024-07-24 23:17:09.685934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.995 [2024-07-24 23:17:09.685940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.995 [2024-07-24 23:17:09.689487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.995 [2024-07-24 23:17:09.698908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.995 [2024-07-24 23:17:09.699554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.995 [2024-07-24 23:17:09.699569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.995 [2024-07-24 23:17:09.699576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.995 [2024-07-24 23:17:09.699799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.995 [2024-07-24 23:17:09.700019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.995 [2024-07-24 23:17:09.700026] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.995 [2024-07-24 23:17:09.700033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.995 [2024-07-24 23:17:09.703585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.995 [2024-07-24 23:17:09.712791] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.995 [2024-07-24 23:17:09.713399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.995 [2024-07-24 23:17:09.713413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.995 [2024-07-24 23:17:09.713421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.995 [2024-07-24 23:17:09.713639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.995 [2024-07-24 23:17:09.713863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.995 [2024-07-24 23:17:09.713871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.995 [2024-07-24 23:17:09.713878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.995 [2024-07-24 23:17:09.717420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.995 [2024-07-24 23:17:09.726632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.995 [2024-07-24 23:17:09.727087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.995 [2024-07-24 23:17:09.727102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.995 [2024-07-24 23:17:09.727109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.995 [2024-07-24 23:17:09.727328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.995 [2024-07-24 23:17:09.727546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.995 [2024-07-24 23:17:09.727554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.995 [2024-07-24 23:17:09.727561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.995 [2024-07-24 23:17:09.731108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.995 [2024-07-24 23:17:09.740519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.995 [2024-07-24 23:17:09.741139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.995 [2024-07-24 23:17:09.741154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.995 [2024-07-24 23:17:09.741161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.995 [2024-07-24 23:17:09.741383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.995 [2024-07-24 23:17:09.741602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.995 [2024-07-24 23:17:09.741610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.995 [2024-07-24 23:17:09.741616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.995 [2024-07-24 23:17:09.745162] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.995 [2024-07-24 23:17:09.754363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.995 [2024-07-24 23:17:09.754959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.995 [2024-07-24 23:17:09.754996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.996 [2024-07-24 23:17:09.755008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.996 [2024-07-24 23:17:09.755248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.996 [2024-07-24 23:17:09.755471] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.996 [2024-07-24 23:17:09.755480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.996 [2024-07-24 23:17:09.755487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.996 [2024-07-24 23:17:09.759046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.996 [2024-07-24 23:17:09.768255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.996 [2024-07-24 23:17:09.769074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.996 [2024-07-24 23:17:09.769111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:51.996 [2024-07-24 23:17:09.769121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:51.996 [2024-07-24 23:17:09.769360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:51.996 [2024-07-24 23:17:09.769582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.996 [2024-07-24 23:17:09.769590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.996 [2024-07-24 23:17:09.769598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.996 [2024-07-24 23:17:09.773157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.258 [2024-07-24 23:17:09.782158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.258 [2024-07-24 23:17:09.782853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.258 [2024-07-24 23:17:09.782890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:52.258 [2024-07-24 23:17:09.782902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:52.258 [2024-07-24 23:17:09.783145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:52.258 [2024-07-24 23:17:09.783367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.258 [2024-07-24 23:17:09.783375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.258 [2024-07-24 23:17:09.783387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.258 [2024-07-24 23:17:09.786947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.258 [2024-07-24 23:17:09.795952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.258 [2024-07-24 23:17:09.796609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.258 [2024-07-24 23:17:09.796628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:52.258 [2024-07-24 23:17:09.796636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:52.258 [2024-07-24 23:17:09.796862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:52.258 [2024-07-24 23:17:09.797082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.258 [2024-07-24 23:17:09.797089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.258 [2024-07-24 23:17:09.797096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.258 [2024-07-24 23:17:09.800640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.258 [2024-07-24 23:17:09.809853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.258 [2024-07-24 23:17:09.810572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.258 [2024-07-24 23:17:09.810610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:52.258 [2024-07-24 23:17:09.810620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:52.258 [2024-07-24 23:17:09.810866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:52.258 [2024-07-24 23:17:09.811089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.258 [2024-07-24 23:17:09.811097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.258 [2024-07-24 23:17:09.811105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.258 [2024-07-24 23:17:09.814654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.258 [2024-07-24 23:17:09.823663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.258 [2024-07-24 23:17:09.824355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.258 [2024-07-24 23:17:09.824392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:52.258 [2024-07-24 23:17:09.824402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:52.258 [2024-07-24 23:17:09.824641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:52.258 [2024-07-24 23:17:09.824872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.258 [2024-07-24 23:17:09.824881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.258 [2024-07-24 23:17:09.824888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.258 [2024-07-24 23:17:09.828438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.258 [2024-07-24 23:17:09.837647] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.258 [2024-07-24 23:17:09.838369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.258 [2024-07-24 23:17:09.838406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:52.258 [2024-07-24 23:17:09.838417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:52.258 [2024-07-24 23:17:09.838655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:52.258 [2024-07-24 23:17:09.838886] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.258 [2024-07-24 23:17:09.838895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.258 [2024-07-24 23:17:09.838902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.258 [2024-07-24 23:17:09.842452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.258 [2024-07-24 23:17:09.851453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.258 [2024-07-24 23:17:09.852170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.258 [2024-07-24 23:17:09.852207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:52.258 [2024-07-24 23:17:09.852217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:52.258 [2024-07-24 23:17:09.852456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:52.258 [2024-07-24 23:17:09.852679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.258 [2024-07-24 23:17:09.852687] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.258 [2024-07-24 23:17:09.852694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.259 [2024-07-24 23:17:09.856251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.259 [2024-07-24 23:17:09.865251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.259 [2024-07-24 23:17:09.865871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.259 [2024-07-24 23:17:09.865908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:52.259 [2024-07-24 23:17:09.865920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:52.259 [2024-07-24 23:17:09.866160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:52.259 [2024-07-24 23:17:09.866383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.259 [2024-07-24 23:17:09.866392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.259 [2024-07-24 23:17:09.866399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.259 [2024-07-24 23:17:09.869956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.259 [2024-07-24 23:17:09.879163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.259 [2024-07-24 23:17:09.879734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.259 [2024-07-24 23:17:09.879777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420
00:28:52.259 [2024-07-24 23:17:09.879789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set
00:28:52.259 [2024-07-24 23:17:09.880032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor
00:28:52.259 [2024-07-24 23:17:09.880259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.259 [2024-07-24 23:17:09.880267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.259 [2024-07-24 23:17:09.880275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.259 [2024-07-24 23:17:09.883830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.259 [2024-07-24 23:17:09.893040] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.259 [2024-07-24 23:17:09.893696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.259 [2024-07-24 23:17:09.893714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:52.259 [2024-07-24 23:17:09.893722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:52.259 [2024-07-24 23:17:09.893947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:52.259 [2024-07-24 23:17:09.894167] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.259 [2024-07-24 23:17:09.894174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.259 [2024-07-24 23:17:09.894181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.259 [2024-07-24 23:17:09.897725] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.259 [2024-07-24 23:17:09.906946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.259 [2024-07-24 23:17:09.907566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.259 [2024-07-24 23:17:09.907581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:52.259 [2024-07-24 23:17:09.907588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:52.259 [2024-07-24 23:17:09.907813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:52.259 [2024-07-24 23:17:09.908032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.259 [2024-07-24 23:17:09.908040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.259 [2024-07-24 23:17:09.908046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.259 [2024-07-24 23:17:09.911588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.259 [2024-07-24 23:17:09.920839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.259 [2024-07-24 23:17:09.921456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.259 [2024-07-24 23:17:09.921472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:52.259 [2024-07-24 23:17:09.921479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:52.259 [2024-07-24 23:17:09.921698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:52.259 [2024-07-24 23:17:09.921921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.259 [2024-07-24 23:17:09.921930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.259 [2024-07-24 23:17:09.921936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.259 [2024-07-24 23:17:09.925492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.259 [2024-07-24 23:17:09.934698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.259 [2024-07-24 23:17:09.935372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.259 [2024-07-24 23:17:09.935409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:52.259 [2024-07-24 23:17:09.935420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:52.259 [2024-07-24 23:17:09.935658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:52.259 [2024-07-24 23:17:09.935891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.259 [2024-07-24 23:17:09.935900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.259 [2024-07-24 23:17:09.935908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.259 [2024-07-24 23:17:09.939458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.259 [2024-07-24 23:17:09.948670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.259 [2024-07-24 23:17:09.949435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.259 [2024-07-24 23:17:09.949472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:52.259 [2024-07-24 23:17:09.949484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:52.259 [2024-07-24 23:17:09.949725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:52.259 [2024-07-24 23:17:09.949957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.259 [2024-07-24 23:17:09.949966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.259 [2024-07-24 23:17:09.949974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.259 [2024-07-24 23:17:09.953525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.259 [2024-07-24 23:17:09.962528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.259 [2024-07-24 23:17:09.963044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.259 [2024-07-24 23:17:09.963081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:52.259 [2024-07-24 23:17:09.963092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:52.259 [2024-07-24 23:17:09.963332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:52.259 [2024-07-24 23:17:09.963554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.259 [2024-07-24 23:17:09.963562] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.259 [2024-07-24 23:17:09.963570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.259 [2024-07-24 23:17:09.967129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.259 [2024-07-24 23:17:09.976342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.259 [2024-07-24 23:17:09.976976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.259 [2024-07-24 23:17:09.977013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:52.259 [2024-07-24 23:17:09.977028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:52.259 [2024-07-24 23:17:09.977268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:52.259 [2024-07-24 23:17:09.977490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.259 [2024-07-24 23:17:09.977499] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.259 [2024-07-24 23:17:09.977506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.259 [2024-07-24 23:17:09.981064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.259 [2024-07-24 23:17:09.990279] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.259 [2024-07-24 23:17:09.991047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.259 [2024-07-24 23:17:09.991084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:52.259 [2024-07-24 23:17:09.991095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:52.259 [2024-07-24 23:17:09.991334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:52.259 [2024-07-24 23:17:09.991557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.259 [2024-07-24 23:17:09.991565] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.259 [2024-07-24 23:17:09.991573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.259 [2024-07-24 23:17:09.995131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.260 [2024-07-24 23:17:10.004618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.260 [2024-07-24 23:17:10.005251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.260 [2024-07-24 23:17:10.005270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:52.260 [2024-07-24 23:17:10.005278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:52.260 [2024-07-24 23:17:10.005498] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:52.260 [2024-07-24 23:17:10.005718] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.260 [2024-07-24 23:17:10.005726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.260 [2024-07-24 23:17:10.005733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.260 [2024-07-24 23:17:10.009290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.260 [2024-07-24 23:17:10.018497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.260 [2024-07-24 23:17:10.019074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.260 [2024-07-24 23:17:10.019111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:52.260 [2024-07-24 23:17:10.019122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:52.260 [2024-07-24 23:17:10.019361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:52.260 [2024-07-24 23:17:10.019589] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.260 [2024-07-24 23:17:10.019598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.260 [2024-07-24 23:17:10.019606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.260 [2024-07-24 23:17:10.023175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.260 [2024-07-24 23:17:10.032942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.260 [2024-07-24 23:17:10.033625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.260 [2024-07-24 23:17:10.033663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:52.260 [2024-07-24 23:17:10.033675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:52.260 [2024-07-24 23:17:10.033927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:52.260 [2024-07-24 23:17:10.034152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.260 [2024-07-24 23:17:10.034160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.260 [2024-07-24 23:17:10.034169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.260 [2024-07-24 23:17:10.037720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.522 [2024-07-24 23:17:10.046936] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.522 [2024-07-24 23:17:10.047603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.522 [2024-07-24 23:17:10.047621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:52.522 [2024-07-24 23:17:10.047629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:52.522 [2024-07-24 23:17:10.047855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:52.522 [2024-07-24 23:17:10.048075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.522 [2024-07-24 23:17:10.048084] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.522 [2024-07-24 23:17:10.048090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.522 [2024-07-24 23:17:10.051636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.522 [2024-07-24 23:17:10.060848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.522 [2024-07-24 23:17:10.061553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.522 [2024-07-24 23:17:10.061589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:52.522 [2024-07-24 23:17:10.061600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:52.522 [2024-07-24 23:17:10.061847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:52.522 [2024-07-24 23:17:10.062071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.522 [2024-07-24 23:17:10.062080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.522 [2024-07-24 23:17:10.062087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.522 [2024-07-24 23:17:10.065636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.522 [2024-07-24 23:17:10.074645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.522 [2024-07-24 23:17:10.075413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.522 [2024-07-24 23:17:10.075449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:52.522 [2024-07-24 23:17:10.075461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:52.522 [2024-07-24 23:17:10.075703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:52.522 [2024-07-24 23:17:10.075935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.522 [2024-07-24 23:17:10.075944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.522 [2024-07-24 23:17:10.075952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.522 [2024-07-24 23:17:10.079501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.522 23:17:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:52.522 23:17:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:28:52.522 23:17:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:52.522 23:17:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:52.522 23:17:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:52.522 [2024-07-24 23:17:10.088501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.522 [2024-07-24 23:17:10.089136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.522 [2024-07-24 23:17:10.089156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:52.522 [2024-07-24 23:17:10.089164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:52.522 [2024-07-24 23:17:10.089384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:52.522 [2024-07-24 23:17:10.089603] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.522 [2024-07-24 23:17:10.089611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.522 [2024-07-24 23:17:10.089618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.522 [2024-07-24 23:17:10.093170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.522 [2024-07-24 23:17:10.102391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.522 [2024-07-24 23:17:10.103059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.522 [2024-07-24 23:17:10.103097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:52.522 [2024-07-24 23:17:10.103109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:52.522 [2024-07-24 23:17:10.103349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:52.522 [2024-07-24 23:17:10.103573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.522 [2024-07-24 23:17:10.103581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.522 [2024-07-24 23:17:10.103589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.522 [2024-07-24 23:17:10.107158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.522 [2024-07-24 23:17:10.116384] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.522 [2024-07-24 23:17:10.117107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.523 [2024-07-24 23:17:10.117145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:52.523 [2024-07-24 23:17:10.117156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:52.523 [2024-07-24 23:17:10.117395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:52.523 [2024-07-24 23:17:10.117618] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.523 [2024-07-24 23:17:10.117627] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.523 [2024-07-24 23:17:10.117635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.523 [2024-07-24 23:17:10.121194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.523 23:17:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:52.523 23:17:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:52.523 23:17:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.523 23:17:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:52.523 [2024-07-24 23:17:10.129977] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:52.523 [2024-07-24 23:17:10.130207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.523 [2024-07-24 23:17:10.130764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.523 [2024-07-24 23:17:10.130784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:52.523 [2024-07-24 23:17:10.130792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:52.523 [2024-07-24 23:17:10.131012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:52.523 [2024-07-24 23:17:10.131231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.523 [2024-07-24 23:17:10.131240] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.523 [2024-07-24 23:17:10.131248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.523 [2024-07-24 23:17:10.134798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.523 [2024-07-24 23:17:10.144004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.523 [2024-07-24 23:17:10.144642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.523 [2024-07-24 23:17:10.144657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:52.523 [2024-07-24 23:17:10.144665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:52.523 [2024-07-24 23:17:10.144890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:52.523 [2024-07-24 23:17:10.145109] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.523 [2024-07-24 23:17:10.145118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.523 [2024-07-24 23:17:10.145124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.523 23:17:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.523 23:17:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:52.523 23:17:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.523 23:17:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:52.523 [2024-07-24 23:17:10.148677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.523 [2024-07-24 23:17:10.157885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.523 [2024-07-24 23:17:10.158481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.523 [2024-07-24 23:17:10.158518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:52.523 [2024-07-24 23:17:10.158528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:52.523 [2024-07-24 23:17:10.158776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:52.523 [2024-07-24 23:17:10.159001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.523 [2024-07-24 23:17:10.159010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.523 [2024-07-24 23:17:10.159017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.523 [2024-07-24 23:17:10.162567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.523 Malloc0 00:28:52.523 23:17:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.523 23:17:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:52.523 23:17:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.523 23:17:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:52.523 [2024-07-24 23:17:10.171787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.523 [2024-07-24 23:17:10.172381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.523 [2024-07-24 23:17:10.172418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:52.523 [2024-07-24 23:17:10.172429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:52.523 [2024-07-24 23:17:10.172668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:52.523 [2024-07-24 23:17:10.172898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.523 [2024-07-24 23:17:10.172909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.523 [2024-07-24 23:17:10.172916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.523 [2024-07-24 23:17:10.176469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.523 23:17:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.523 23:17:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:52.523 23:17:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.523 23:17:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:52.523 [2024-07-24 23:17:10.185681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.523 [2024-07-24 23:17:10.186411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.523 [2024-07-24 23:17:10.186452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1096540 with addr=10.0.0.2, port=4420 00:28:52.523 [2024-07-24 23:17:10.186462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096540 is same with the state(5) to be set 00:28:52.523 [2024-07-24 23:17:10.186701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096540 (9): Bad file descriptor 00:28:52.523 [2024-07-24 23:17:10.186932] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.523 [2024-07-24 23:17:10.186942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.523 [2024-07-24 23:17:10.186949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:52.523 23:17:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.523 23:17:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:52.523 23:17:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.523 23:17:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:52.523 [2024-07-24 23:17:10.190499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:52.523 [2024-07-24 23:17:10.196663] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:52.523 [2024-07-24 23:17:10.199522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.523 23:17:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.523 23:17:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1039098 00:28:52.523 [2024-07-24 23:17:10.246898] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:02.522 00:29:02.522 Latency(us) 00:29:02.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:02.522 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:02.522 Verification LBA range: start 0x0 length 0x4000 00:29:02.522 Nvme1n1 : 15.01 8655.83 33.81 9672.06 0.00 6958.38 1078.61 14854.83 00:29:02.522 =================================================================================================================== 00:29:02.522 Total : 8655.83 33.81 9672.06 0.00 6958.38 1078.61 14854.83 00:29:02.522 23:17:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:02.522 23:17:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:02.522 23:17:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.522 23:17:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:02.522 23:17:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.522 23:17:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:02.522 23:17:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:02.522 23:17:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:02.522 23:17:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:29:02.522 23:17:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:02.522 23:17:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:29:02.522 23:17:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:02.522 23:17:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:02.522 rmmod nvme_tcp 00:29:02.522 rmmod nvme_fabrics 00:29:02.522 rmmod nvme_keyring 00:29:02.522 23:17:18 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:02.522 23:17:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:29:02.522 23:17:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:29:02.522 23:17:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1040371 ']' 00:29:02.522 23:17:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1040371 00:29:02.522 23:17:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 1040371 ']' 00:29:02.522 23:17:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 1040371 00:29:02.522 23:17:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:29:02.522 23:17:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:02.522 23:17:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1040371 00:29:02.522 23:17:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:02.522 23:17:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:02.522 23:17:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1040371' 00:29:02.522 killing process with pid 1040371 00:29:02.522 23:17:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 1040371 00:29:02.522 23:17:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 1040371 00:29:02.522 23:17:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:02.522 23:17:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:02.522 23:17:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:02.522 23:17:19 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:02.522 23:17:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:02.522 23:17:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.522 23:17:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:02.522 23:17:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:03.464 23:17:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:03.464 00:29:03.464 real 0m28.786s 00:29:03.464 user 1m2.813s 00:29:03.464 sys 0m7.931s 00:29:03.464 23:17:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:03.464 23:17:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:03.464 ************************************ 00:29:03.464 END TEST nvmf_bdevperf 00:29:03.464 ************************************ 00:29:03.464 23:17:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:03.464 23:17:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:03.464 23:17:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:03.464 23:17:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.464 ************************************ 00:29:03.464 START TEST nvmf_target_disconnect 00:29:03.464 ************************************ 00:29:03.464 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:03.725 * Looking for test storage... 
00:29:03.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:03.725 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:03.725 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:03.725 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:03.725 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:03.725 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:03.725 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:03.725 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:03.725 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:03.725 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:03.725 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:03.725 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:03.725 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:03.725 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:03.725 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:03.725 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:03.725 23:17:21 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:03.725 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:03.725 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:03.725 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:03.725 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:03.725 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:03.725 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:03.726 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.726 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.726 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.726 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:03.726 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.726 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:29:03.726 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:03.726 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:03.726 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:03.726 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:03.726 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:03.726 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:03.726 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:03.726 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:03.726 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:03.726 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:03.726 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:29:03.726 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:03.726 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:03.726 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:03.726 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:03.726 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:03.726 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:03.726 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:03.726 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:03.726 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:03.726 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:03.726 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:03.726 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:29:03.726 23:17:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:11.866 
23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:11.866 23:17:29 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:11.866 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:11.866 23:17:29 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:11.866 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:11.866 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up 
== up ]] 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:11.867 Found net devices under 0000:31:00.0: cvl_0_0 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:11.867 Found net devices under 0000:31:00.1: cvl_0_1 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:29:11.867 23:17:29 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:11.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:11.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:29:11.867 00:29:11.867 --- 10.0.0.2 ping statistics --- 00:29:11.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.867 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:11.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:11.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.376 ms 00:29:11.867 00:29:11.867 --- 10.0.0.1 ping statistics --- 00:29:11.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.867 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:11.867 ************************************ 00:29:11.867 START TEST nvmf_target_disconnect_tc1 00:29:11.867 ************************************ 00:29:11.867 23:17:29 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:11.867 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.867 [2024-07-24 23:17:29.536824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.867 [2024-07-24 23:17:29.536895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fa4b0 with addr=10.0.0.2, port=4420 00:29:11.867 [2024-07-24 23:17:29.536929] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:11.867 [2024-07-24 23:17:29.536944] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:11.867 [2024-07-24 23:17:29.536952] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:11.867 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:11.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:11.867 Initializing NVMe Controllers 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:11.867 23:17:29 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:11.867 00:29:11.867 real 0m0.118s 00:29:11.867 user 0m0.040s 00:29:11.867 sys 0m0.077s 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:11.867 ************************************ 00:29:11.867 END TEST nvmf_target_disconnect_tc1 00:29:11.867 ************************************ 00:29:11.867 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:11.868 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:11.868 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:11.868 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:11.868 ************************************ 00:29:11.868 START TEST nvmf_target_disconnect_tc2 00:29:11.868 ************************************ 00:29:11.868 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:29:11.868 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:11.868 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:11.868 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:11.868 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:11.868 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:11.868 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1046774 00:29:11.868 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1046774 00:29:11.868 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1046774 ']' 00:29:11.868 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:11.868 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.868 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:11.868 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:11.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:11.868 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:11.868 23:17:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:12.129 [2024-07-24 23:17:29.690064] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:29:12.129 [2024-07-24 23:17:29.690121] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:12.129 EAL: No free 2048 kB hugepages reported on node 1 00:29:12.129 [2024-07-24 23:17:29.785638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:12.129 [2024-07-24 23:17:29.882137] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:12.129 [2024-07-24 23:17:29.882195] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:12.129 [2024-07-24 23:17:29.882204] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:12.129 [2024-07-24 23:17:29.882211] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:12.129 [2024-07-24 23:17:29.882217] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
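The tc2 setup below configures the target through SPDK's JSON-RPC layer via `rpc_cmd` wrappers (`nvmf_create_transport`, `nvmf_create_subsystem`, `nvmf_subsystem_add_ns`, `nvmf_subsystem_add_listener`). As a rough sketch, those calls correspond to JSON-RPC payloads like the following — the method names and addresses are taken from this log, but the exact parameter names are an assumption based on SPDK's usual RPC conventions, not copied from this run:

```python
import json

# Hypothetical reconstruction of the JSON-RPC requests behind the
# rpc_cmd calls in this test. Method names come from the log; the
# parameter spellings are assumed, not verified against this build.
def rpc(request_id, method, params):
    return {"jsonrpc": "2.0", "id": request_id, "method": method, "params": params}

setup = [
    rpc(1, "nvmf_create_transport", {"trtype": "TCP"}),  # log also passes -o (assumed: an extra transport option)
    rpc(2, "nvmf_create_subsystem", {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "allow_any_host": True,                 # the -a flag
        "serial_number": "SPDK00000000000001",  # the -s value
    }),
    rpc(3, "nvmf_subsystem_add_ns", {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "Malloc0"},
    }),
    rpc(4, "nvmf_subsystem_add_listener", {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "listen_address": {"trtype": "TCP", "adrfam": "IPv4",
                           "traddr": "10.0.0.2", "trsvcid": "4420"},
    }),
]

print(json.dumps(setup[0]))
```

The ordering mirrors the log: transport first, then subsystem, namespace, and listener, after which the `reconnect` example can attempt to connect to 10.0.0.2:4420.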
00:29:12.129 [2024-07-24 23:17:29.882385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:29:12.129 [2024-07-24 23:17:29.882522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:29:12.129 [2024-07-24 23:17:29.882682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:12.129 [2024-07-24 23:17:29.882684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:29:13.071 23:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:13.071 23:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:29:13.071 23:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:13.071 23:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:13.071 23:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:13.071 23:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:13.071 23:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:13.071 23:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.071 23:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:13.071 Malloc0 00:29:13.071 23:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.071 23:17:30 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:13.071 23:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.071 23:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:13.071 [2024-07-24 23:17:30.569540] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:13.071 23:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.071 23:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:13.071 23:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.071 23:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:13.071 23:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.071 23:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:13.071 23:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.071 23:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:13.071 23:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.071 23:17:30 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:13.071 23:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.071 23:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:13.071 [2024-07-24 23:17:30.609960] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:13.071 23:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.071 23:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:13.072 23:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.072 23:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:13.072 23:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.072 23:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1047121 00:29:13.072 23:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:13.072 23:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:13.072 EAL: No free 2048 kB 
hugepages reported on node 1 00:29:14.986 23:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1046774 00:29:14.986 23:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:14.986 Read completed with error (sct=0, sc=8) 00:29:14.986 starting I/O failed 00:29:14.986 Read completed with error (sct=0, sc=8) 00:29:14.986 starting I/O failed 00:29:14.986 Read completed with error (sct=0, sc=8) 00:29:14.986 starting I/O failed 00:29:14.986 Read completed with error (sct=0, sc=8) 00:29:14.986 starting I/O failed 00:29:14.986 Read completed with error (sct=0, sc=8) 00:29:14.986 starting I/O failed 00:29:14.986 Write completed with error (sct=0, sc=8) 00:29:14.986 starting I/O failed 00:29:14.986 Read completed with error (sct=0, sc=8) 00:29:14.986 starting I/O failed 00:29:14.986 Read completed with error (sct=0, sc=8) 00:29:14.986 starting I/O failed 00:29:14.986 Write completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Write completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Write completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Write completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Read completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Write completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Read completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Read completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Read completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Write completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Write completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Write completed with error (sct=0, 
sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Read completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Read completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Read completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Write completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Write completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Write completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Write completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Read completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Write completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Read completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Read completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Write completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Read completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Read completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 [2024-07-24 23:17:32.643249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.987 Read completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Read completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Read completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Read completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Read completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Read completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Read completed with error (sct=0, sc=8) 
00:29:14.987 starting I/O failed 00:29:14.987 Read completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Read completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Read completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Read completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Write completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Read completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Read completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Read completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Read completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Read completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Read completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Read completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Write completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Write completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Read completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Write completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Write completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Read completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Read completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Read completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Read completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Write completed with error (sct=0, sc=8) 00:29:14.987 starting I/O failed 00:29:14.987 Write completed with error (sct=0, sc=8) 00:29:14.987 
starting I/O failed 00:29:14.987 [2024-07-24 23:17:32.643495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:14.987 [2024-07-24 23:17:32.643779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.987 [2024-07-24 23:17:32.643797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.987 qpair failed and we were unable to recover it. 00:29:14.987 [2024-07-24 23:17:32.644068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.987 [2024-07-24 23:17:32.644078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.987 qpair failed and we were unable to recover it. 00:29:14.987 [2024-07-24 23:17:32.644408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.987 [2024-07-24 23:17:32.644419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.987 qpair failed and we were unable to recover it. 00:29:14.987 [2024-07-24 23:17:32.644759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.987 [2024-07-24 23:17:32.644769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.987 qpair failed and we were unable to recover it. 00:29:14.987 [2024-07-24 23:17:32.645093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.987 [2024-07-24 23:17:32.645103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.987 qpair failed and we were unable to recover it. 
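Every failed attempt above carries `errno = 111` out of `posix_sock_create`'s `connect()`. On Linux that number is ECONNREFUSED, i.e. nothing was accepting on 10.0.0.2:4420 at that instant — which is precisely the condition the target-disconnect test provokes. A quick confirmation of the mapping (Linux errno numbering assumed):

```python
import errno
import os

# The connect() failures in this log report errno = 111; on Linux
# that value maps to ECONNREFUSED ("Connection refused").
assert errno.ECONNREFUSED == 111
print(os.strerror(errno.ECONNREFUSED))  # Connection refused
```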
00:29:14.987 [2024-07-24 23:17:32.645417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.987 [2024-07-24 23:17:32.645427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.987 qpair failed and we were unable to recover it. 00:29:14.987 [2024-07-24 23:17:32.645673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.987 [2024-07-24 23:17:32.645683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.987 qpair failed and we were unable to recover it. 00:29:14.987 [2024-07-24 23:17:32.645905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.987 [2024-07-24 23:17:32.645917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.987 qpair failed and we were unable to recover it. 00:29:14.987 [2024-07-24 23:17:32.646326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.987 [2024-07-24 23:17:32.646336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.987 qpair failed and we were unable to recover it. 00:29:14.987 [2024-07-24 23:17:32.646735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.987 [2024-07-24 23:17:32.646745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.987 qpair failed and we were unable to recover it. 
00:29:14.987 [2024-07-24 23:17:32.647142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.987 [2024-07-24 23:17:32.647152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.987 qpair failed and we were unable to recover it. 00:29:14.987 [2024-07-24 23:17:32.647544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.987 [2024-07-24 23:17:32.647558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.987 qpair failed and we were unable to recover it. 00:29:14.987 [2024-07-24 23:17:32.647959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.987 [2024-07-24 23:17:32.647969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.987 qpair failed and we were unable to recover it. 00:29:14.987 [2024-07-24 23:17:32.648358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.987 [2024-07-24 23:17:32.648368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.987 qpair failed and we were unable to recover it. 00:29:14.987 [2024-07-24 23:17:32.648762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.987 [2024-07-24 23:17:32.648772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.987 qpair failed and we were unable to recover it. 
00:29:14.987 [2024-07-24 23:17:32.649204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.987 [2024-07-24 23:17:32.649213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.987 qpair failed and we were unable to recover it. 00:29:14.987 [2024-07-24 23:17:32.649533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.987 [2024-07-24 23:17:32.649543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.987 qpair failed and we were unable to recover it. 00:29:14.988 [2024-07-24 23:17:32.649933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.988 [2024-07-24 23:17:32.649944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.988 qpair failed and we were unable to recover it. 00:29:14.988 [2024-07-24 23:17:32.650178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.988 [2024-07-24 23:17:32.650188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.988 qpair failed and we were unable to recover it. 00:29:14.988 [2024-07-24 23:17:32.650368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.988 [2024-07-24 23:17:32.650379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.988 qpair failed and we were unable to recover it. 
00:29:14.988 [2024-07-24 23:17:32.650669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.988 [2024-07-24 23:17:32.650679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.988 qpair failed and we were unable to recover it. 00:29:14.988 [2024-07-24 23:17:32.651108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.988 [2024-07-24 23:17:32.651118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.988 qpair failed and we were unable to recover it. 00:29:14.988 [2024-07-24 23:17:32.651440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.988 [2024-07-24 23:17:32.651450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.988 qpair failed and we were unable to recover it. 00:29:14.988 [2024-07-24 23:17:32.651833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.988 [2024-07-24 23:17:32.651843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.988 qpair failed and we were unable to recover it. 00:29:14.988 [2024-07-24 23:17:32.652124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.988 [2024-07-24 23:17:32.652134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.988 qpair failed and we were unable to recover it. 
00:29:14.988 [2024-07-24 23:17:32.652483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.988 [2024-07-24 23:17:32.652493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.988 qpair failed and we were unable to recover it. 00:29:14.988 [2024-07-24 23:17:32.652821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.988 [2024-07-24 23:17:32.652831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.988 qpair failed and we were unable to recover it. 00:29:14.988 [2024-07-24 23:17:32.653236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.988 [2024-07-24 23:17:32.653246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.988 qpair failed and we were unable to recover it. 00:29:14.988 [2024-07-24 23:17:32.653637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.988 [2024-07-24 23:17:32.653647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.988 qpair failed and we were unable to recover it. 00:29:14.988 [2024-07-24 23:17:32.654044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.988 [2024-07-24 23:17:32.654054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.988 qpair failed and we were unable to recover it. 
00:29:14.988 [2024-07-24 23:17:32.654434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.988 [2024-07-24 23:17:32.654443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.988 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / qpair connection error repeats for roughly 113 further attempts between 23:17:32.654834 and 23:17:32.698089, all against tqpair=0x7f6bf4000b90, addr=10.0.0.2, port=4420, each ending with "qpair failed and we were unable to recover it." ...]
00:29:14.991 [2024-07-24 23:17:32.698289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.991 [2024-07-24 23:17:32.698309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.991 qpair failed and we were unable to recover it.
00:29:14.991 [2024-07-24 23:17:32.698714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.991 [2024-07-24 23:17:32.698734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.991 qpair failed and we were unable to recover it. 00:29:14.991 [2024-07-24 23:17:32.699097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.991 [2024-07-24 23:17:32.699116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.991 qpair failed and we were unable to recover it. 00:29:14.991 [2024-07-24 23:17:32.699510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.991 [2024-07-24 23:17:32.699529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.991 qpair failed and we were unable to recover it. 00:29:14.991 [2024-07-24 23:17:32.699954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.991 [2024-07-24 23:17:32.699975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.991 qpair failed and we were unable to recover it. 00:29:14.991 [2024-07-24 23:17:32.700372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.991 [2024-07-24 23:17:32.700391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.991 qpair failed and we were unable to recover it. 
00:29:14.991 [2024-07-24 23:17:32.700715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.991 [2024-07-24 23:17:32.700734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.991 qpair failed and we were unable to recover it. 00:29:14.991 [2024-07-24 23:17:32.701119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.991 [2024-07-24 23:17:32.701139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.991 qpair failed and we were unable to recover it. 00:29:14.991 [2024-07-24 23:17:32.701562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.991 [2024-07-24 23:17:32.701588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.991 qpair failed and we were unable to recover it. 00:29:14.991 [2024-07-24 23:17:32.702044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.991 [2024-07-24 23:17:32.702072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.991 qpair failed and we were unable to recover it. 00:29:14.991 [2024-07-24 23:17:32.702464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.991 [2024-07-24 23:17:32.702491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.991 qpair failed and we were unable to recover it. 
00:29:14.991 [2024-07-24 23:17:32.702984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.991 [2024-07-24 23:17:32.703086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.991 qpair failed and we were unable to recover it. 00:29:14.991 [2024-07-24 23:17:32.703573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.991 [2024-07-24 23:17:32.703607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 00:29:14.992 [2024-07-24 23:17:32.704073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.704103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 00:29:14.992 [2024-07-24 23:17:32.704528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.704556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 00:29:14.992 [2024-07-24 23:17:32.704968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.704996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 
00:29:14.992 [2024-07-24 23:17:32.705392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.705419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 00:29:14.992 [2024-07-24 23:17:32.705791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.705820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 00:29:14.992 [2024-07-24 23:17:32.706248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.706275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 00:29:14.992 [2024-07-24 23:17:32.706683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.706709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 00:29:14.992 [2024-07-24 23:17:32.706989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.707016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 
00:29:14.992 [2024-07-24 23:17:32.707421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.707448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 00:29:14.992 [2024-07-24 23:17:32.707775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.707808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 00:29:14.992 [2024-07-24 23:17:32.708197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.708224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 00:29:14.992 [2024-07-24 23:17:32.708530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.708565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 00:29:14.992 [2024-07-24 23:17:32.708968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.708997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 
00:29:14.992 [2024-07-24 23:17:32.709390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.709416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 00:29:14.992 [2024-07-24 23:17:32.709838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.709873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 00:29:14.992 [2024-07-24 23:17:32.710256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.710285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 00:29:14.992 [2024-07-24 23:17:32.710540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.710571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 00:29:14.992 [2024-07-24 23:17:32.710991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.711020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 
00:29:14.992 [2024-07-24 23:17:32.711417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.711444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 00:29:14.992 [2024-07-24 23:17:32.711850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.711879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 00:29:14.992 [2024-07-24 23:17:32.712287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.712313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 00:29:14.992 [2024-07-24 23:17:32.712706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.712732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 00:29:14.992 [2024-07-24 23:17:32.712937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.712968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 
00:29:14.992 [2024-07-24 23:17:32.713381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.713408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 00:29:14.992 [2024-07-24 23:17:32.713821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.713850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 00:29:14.992 [2024-07-24 23:17:32.714254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.714281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 00:29:14.992 [2024-07-24 23:17:32.714699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.714726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 00:29:14.992 [2024-07-24 23:17:32.715138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.715166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 
00:29:14.992 [2024-07-24 23:17:32.715635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.715662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 00:29:14.992 [2024-07-24 23:17:32.716139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.716167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 00:29:14.992 [2024-07-24 23:17:32.716563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.716590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 00:29:14.992 [2024-07-24 23:17:32.716896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.716927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 00:29:14.992 [2024-07-24 23:17:32.717351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.717378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 
00:29:14.992 [2024-07-24 23:17:32.717732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.717771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 00:29:14.992 [2024-07-24 23:17:32.718170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.718197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 00:29:14.992 [2024-07-24 23:17:32.718604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.718631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 00:29:14.992 [2024-07-24 23:17:32.719031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.992 [2024-07-24 23:17:32.719059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.992 qpair failed and we were unable to recover it. 00:29:14.993 [2024-07-24 23:17:32.719451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.719478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 
00:29:14.993 [2024-07-24 23:17:32.719870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.719898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 00:29:14.993 [2024-07-24 23:17:32.720311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.720338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 00:29:14.993 [2024-07-24 23:17:32.720725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.720761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 00:29:14.993 [2024-07-24 23:17:32.721094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.721123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 00:29:14.993 [2024-07-24 23:17:32.721398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.721426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 
00:29:14.993 [2024-07-24 23:17:32.721767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.721796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 00:29:14.993 [2024-07-24 23:17:32.722198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.722225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 00:29:14.993 [2024-07-24 23:17:32.722514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.722543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 00:29:14.993 [2024-07-24 23:17:32.722956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.722985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 00:29:14.993 [2024-07-24 23:17:32.723387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.723414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 
00:29:14.993 [2024-07-24 23:17:32.723716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.723744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 00:29:14.993 [2024-07-24 23:17:32.724172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.724199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 00:29:14.993 [2024-07-24 23:17:32.724611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.724638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 00:29:14.993 [2024-07-24 23:17:32.724910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.724939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 00:29:14.993 [2024-07-24 23:17:32.725333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.725360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 
00:29:14.993 [2024-07-24 23:17:32.725773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.725801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 00:29:14.993 [2024-07-24 23:17:32.726236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.726269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 00:29:14.993 [2024-07-24 23:17:32.726678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.726705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 00:29:14.993 [2024-07-24 23:17:32.727157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.727186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 00:29:14.993 [2024-07-24 23:17:32.727574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.727601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 
00:29:14.993 [2024-07-24 23:17:32.727885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.727917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 00:29:14.993 [2024-07-24 23:17:32.728338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.728365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 00:29:14.993 [2024-07-24 23:17:32.728768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.728796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 00:29:14.993 [2024-07-24 23:17:32.729206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.729233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 00:29:14.993 [2024-07-24 23:17:32.729642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.729668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 
00:29:14.993 [2024-07-24 23:17:32.730121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.730149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 00:29:14.993 [2024-07-24 23:17:32.730569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.730596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 00:29:14.993 [2024-07-24 23:17:32.730989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.731017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 00:29:14.993 [2024-07-24 23:17:32.731409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.731436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 00:29:14.993 [2024-07-24 23:17:32.731831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.731861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 
00:29:14.993 [2024-07-24 23:17:32.732154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.732186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 00:29:14.993 [2024-07-24 23:17:32.732610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.732637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 00:29:14.993 [2024-07-24 23:17:32.733076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.733104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 00:29:14.993 [2024-07-24 23:17:32.733520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.733547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.993 qpair failed and we were unable to recover it. 00:29:14.993 [2024-07-24 23:17:32.733950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.993 [2024-07-24 23:17:32.733979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 
00:29:14.994 [2024-07-24 23:17:32.734378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.734405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 00:29:14.994 [2024-07-24 23:17:32.734815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.734842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 00:29:14.994 [2024-07-24 23:17:32.735153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.735180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 00:29:14.994 [2024-07-24 23:17:32.735482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.735512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 00:29:14.994 [2024-07-24 23:17:32.735926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.735955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 
00:29:14.994 [2024-07-24 23:17:32.736354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.736381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 00:29:14.994 [2024-07-24 23:17:32.736782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.736810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 00:29:14.994 [2024-07-24 23:17:32.737166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.737192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 00:29:14.994 [2024-07-24 23:17:32.737496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.737523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 00:29:14.994 [2024-07-24 23:17:32.737938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.737966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 
00:29:14.994 [2024-07-24 23:17:32.738393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.738420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 00:29:14.994 [2024-07-24 23:17:32.738810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.738838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 00:29:14.994 [2024-07-24 23:17:32.739117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.739148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 00:29:14.994 [2024-07-24 23:17:32.739482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.739510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 00:29:14.994 [2024-07-24 23:17:32.739917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.739946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 
00:29:14.994 [2024-07-24 23:17:32.740254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.740284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 00:29:14.994 [2024-07-24 23:17:32.740682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.740709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 00:29:14.994 [2024-07-24 23:17:32.741064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.741092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 00:29:14.994 [2024-07-24 23:17:32.741484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.741510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 00:29:14.994 [2024-07-24 23:17:32.741931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.741958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 
00:29:14.994 [2024-07-24 23:17:32.742264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.742291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 00:29:14.994 [2024-07-24 23:17:32.742712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.742745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 00:29:14.994 [2024-07-24 23:17:32.743051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.743081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 00:29:14.994 [2024-07-24 23:17:32.743479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.743505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 00:29:14.994 [2024-07-24 23:17:32.743730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.743767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 
00:29:14.994 [2024-07-24 23:17:32.744190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.744217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 00:29:14.994 [2024-07-24 23:17:32.744513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.744539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 00:29:14.994 [2024-07-24 23:17:32.744845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.744874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 00:29:14.994 [2024-07-24 23:17:32.745266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.745292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 00:29:14.994 [2024-07-24 23:17:32.745718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.745744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 
00:29:14.994 [2024-07-24 23:17:32.746211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.746239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 00:29:14.994 [2024-07-24 23:17:32.746648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.746675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 00:29:14.994 [2024-07-24 23:17:32.746986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.747014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 00:29:14.994 [2024-07-24 23:17:32.747443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.747470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 00:29:14.994 [2024-07-24 23:17:32.747879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.747907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 
00:29:14.994 [2024-07-24 23:17:32.748329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.748356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 00:29:14.994 [2024-07-24 23:17:32.748744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-07-24 23:17:32.748783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 00:29:14.995 [2024-07-24 23:17:32.749184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.749210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-07-24 23:17:32.749680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.749706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-07-24 23:17:32.750099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.750127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 
00:29:14.995 [2024-07-24 23:17:32.750424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.750453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-07-24 23:17:32.750875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.750903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-07-24 23:17:32.751296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.751323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-07-24 23:17:32.751732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.751767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-07-24 23:17:32.752158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.752184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 
00:29:14.995 [2024-07-24 23:17:32.752601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.752627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-07-24 23:17:32.753018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.753046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-07-24 23:17:32.753463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.753489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-07-24 23:17:32.753793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.753821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-07-24 23:17:32.754304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.754330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 
00:29:14.995 [2024-07-24 23:17:32.754750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.754788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-07-24 23:17:32.755233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.755259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-07-24 23:17:32.755530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.755559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-07-24 23:17:32.755942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.755971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-07-24 23:17:32.756410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.756436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 
00:29:14.995 [2024-07-24 23:17:32.756717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.756745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-07-24 23:17:32.757176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.757204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-07-24 23:17:32.757594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.757621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-07-24 23:17:32.758108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.758136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-07-24 23:17:32.758542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.758568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 
00:29:14.995 [2024-07-24 23:17:32.758975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.759003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-07-24 23:17:32.759452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.759485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-07-24 23:17:32.759869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.759898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-07-24 23:17:32.760305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.760331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-07-24 23:17:32.760708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.760734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 
00:29:14.995 [2024-07-24 23:17:32.761041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.761069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-07-24 23:17:32.761363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.761393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-07-24 23:17:32.761795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.761822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-07-24 23:17:32.762218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.762244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-07-24 23:17:32.762637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.762664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 
00:29:14.995 [2024-07-24 23:17:32.763002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.763030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-07-24 23:17:32.763433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.763459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-07-24 23:17:32.763883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.763912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-07-24 23:17:32.764252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-07-24 23:17:32.764279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.996 [2024-07-24 23:17:32.764699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-07-24 23:17:32.764726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 
00:29:14.996 [2024-07-24 23:17:32.765019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-07-24 23:17:32.765051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-07-24 23:17:32.765351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-07-24 23:17:32.765381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-07-24 23:17:32.765790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-07-24 23:17:32.765818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-07-24 23:17:32.766114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-07-24 23:17:32.766142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-07-24 23:17:32.766601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-07-24 23:17:32.766627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 
00:29:14.996 [2024-07-24 23:17:32.767020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-07-24 23:17:32.767047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-07-24 23:17:32.767455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-07-24 23:17:32.767482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-07-24 23:17:32.767787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-07-24 23:17:32.767816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-24 23:17:32.768233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-24 23:17:32.768260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-24 23:17:32.768650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-24 23:17:32.768677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 
00:29:15.267 [2024-07-24 23:17:32.768891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-24 23:17:32.768921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-24 23:17:32.769330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-24 23:17:32.769357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-24 23:17:32.769749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-24 23:17:32.769785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-24 23:17:32.770230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-24 23:17:32.770258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-24 23:17:32.770546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-24 23:17:32.770577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 
00:29:15.267 [... the same three-line record — posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats continuously from 2024-07-24 23:17:32.770960 through 23:17:32.815995 ...]
00:29:15.270 [2024-07-24 23:17:32.816403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.270 [2024-07-24 23:17:32.816430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.270 qpair failed and we were unable to recover it. 00:29:15.270 [2024-07-24 23:17:32.816767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.270 [2024-07-24 23:17:32.816795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.270 qpair failed and we were unable to recover it. 00:29:15.270 [2024-07-24 23:17:32.817197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.270 [2024-07-24 23:17:32.817224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.270 qpair failed and we were unable to recover it. 00:29:15.270 [2024-07-24 23:17:32.817642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.270 [2024-07-24 23:17:32.817669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.270 qpair failed and we were unable to recover it. 00:29:15.270 [2024-07-24 23:17:32.818080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.270 [2024-07-24 23:17:32.818109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.270 qpair failed and we were unable to recover it. 
00:29:15.270 [2024-07-24 23:17:32.818514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.270 [2024-07-24 23:17:32.818541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.270 qpair failed and we were unable to recover it. 00:29:15.270 [2024-07-24 23:17:32.818844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.270 [2024-07-24 23:17:32.818875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.270 qpair failed and we were unable to recover it. 00:29:15.270 [2024-07-24 23:17:32.819251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.270 [2024-07-24 23:17:32.819279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.270 qpair failed and we were unable to recover it. 00:29:15.270 [2024-07-24 23:17:32.819654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.270 [2024-07-24 23:17:32.819692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.270 qpair failed and we were unable to recover it. 00:29:15.270 [2024-07-24 23:17:32.820067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.270 [2024-07-24 23:17:32.820095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.270 qpair failed and we were unable to recover it. 
00:29:15.270 [2024-07-24 23:17:32.820506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.270 [2024-07-24 23:17:32.820533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.270 qpair failed and we were unable to recover it. 00:29:15.270 [2024-07-24 23:17:32.820954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.270 [2024-07-24 23:17:32.820981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.270 qpair failed and we were unable to recover it. 00:29:15.270 [2024-07-24 23:17:32.821374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.270 [2024-07-24 23:17:32.821400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.270 qpair failed and we were unable to recover it. 00:29:15.270 [2024-07-24 23:17:32.821806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.270 [2024-07-24 23:17:32.821834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.270 qpair failed and we were unable to recover it. 00:29:15.270 [2024-07-24 23:17:32.822197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.270 [2024-07-24 23:17:32.822224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.270 qpair failed and we were unable to recover it. 
00:29:15.270 [2024-07-24 23:17:32.822529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.270 [2024-07-24 23:17:32.822555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.270 qpair failed and we were unable to recover it. 00:29:15.270 [2024-07-24 23:17:32.822980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.270 [2024-07-24 23:17:32.823007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.270 qpair failed and we were unable to recover it. 00:29:15.270 [2024-07-24 23:17:32.823424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.270 [2024-07-24 23:17:32.823450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.270 qpair failed and we were unable to recover it. 00:29:15.270 [2024-07-24 23:17:32.823861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.270 [2024-07-24 23:17:32.823888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.270 qpair failed and we were unable to recover it. 00:29:15.270 [2024-07-24 23:17:32.824171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.270 [2024-07-24 23:17:32.824198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.270 qpair failed and we were unable to recover it. 
00:29:15.270 [2024-07-24 23:17:32.824613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.270 [2024-07-24 23:17:32.824639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.270 qpair failed and we were unable to recover it. 00:29:15.270 [2024-07-24 23:17:32.825041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.270 [2024-07-24 23:17:32.825068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.270 qpair failed and we were unable to recover it. 00:29:15.270 [2024-07-24 23:17:32.825479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.271 [2024-07-24 23:17:32.825505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.271 qpair failed and we were unable to recover it. 00:29:15.271 [2024-07-24 23:17:32.825867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.271 [2024-07-24 23:17:32.825894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.271 qpair failed and we were unable to recover it. 00:29:15.271 [2024-07-24 23:17:32.826332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.271 [2024-07-24 23:17:32.826359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.271 qpair failed and we were unable to recover it. 
00:29:15.271 [2024-07-24 23:17:32.826698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.271 [2024-07-24 23:17:32.826725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.271 qpair failed and we were unable to recover it. 00:29:15.271 [2024-07-24 23:17:32.827140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.271 [2024-07-24 23:17:32.827168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.271 qpair failed and we were unable to recover it. 00:29:15.271 [2024-07-24 23:17:32.827560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.271 [2024-07-24 23:17:32.827586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.271 qpair failed and we were unable to recover it. 00:29:15.271 [2024-07-24 23:17:32.828004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.271 [2024-07-24 23:17:32.828032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.271 qpair failed and we were unable to recover it. 00:29:15.271 [2024-07-24 23:17:32.828432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.271 [2024-07-24 23:17:32.828459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.271 qpair failed and we were unable to recover it. 
00:29:15.271 [2024-07-24 23:17:32.828868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.271 [2024-07-24 23:17:32.828895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.271 qpair failed and we were unable to recover it. 00:29:15.271 [2024-07-24 23:17:32.829301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.271 [2024-07-24 23:17:32.829327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.271 qpair failed and we were unable to recover it. 00:29:15.271 [2024-07-24 23:17:32.829718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.271 [2024-07-24 23:17:32.829744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.271 qpair failed and we were unable to recover it. 00:29:15.271 [2024-07-24 23:17:32.830223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.271 [2024-07-24 23:17:32.830250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.271 qpair failed and we were unable to recover it. 00:29:15.271 [2024-07-24 23:17:32.830671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.271 [2024-07-24 23:17:32.830697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.271 qpair failed and we were unable to recover it. 
00:29:15.271 [2024-07-24 23:17:32.831073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.271 [2024-07-24 23:17:32.831102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.271 qpair failed and we were unable to recover it. 00:29:15.271 [2024-07-24 23:17:32.831549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.271 [2024-07-24 23:17:32.831575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.271 qpair failed and we were unable to recover it. 00:29:15.271 [2024-07-24 23:17:32.831972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.271 [2024-07-24 23:17:32.832001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.271 qpair failed and we were unable to recover it. 00:29:15.271 [2024-07-24 23:17:32.832409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.271 [2024-07-24 23:17:32.832435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.271 qpair failed and we were unable to recover it. 00:29:15.271 [2024-07-24 23:17:32.832826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.271 [2024-07-24 23:17:32.832853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.271 qpair failed and we were unable to recover it. 
00:29:15.271 [2024-07-24 23:17:32.833246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.271 [2024-07-24 23:17:32.833274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.271 qpair failed and we were unable to recover it. 00:29:15.271 [2024-07-24 23:17:32.833687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.271 [2024-07-24 23:17:32.833713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.271 qpair failed and we were unable to recover it. 00:29:15.271 [2024-07-24 23:17:32.834031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.271 [2024-07-24 23:17:32.834063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.271 qpair failed and we were unable to recover it. 00:29:15.271 [2024-07-24 23:17:32.834437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.271 [2024-07-24 23:17:32.834464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.271 qpair failed and we were unable to recover it. 00:29:15.271 [2024-07-24 23:17:32.834853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.271 [2024-07-24 23:17:32.834880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.271 qpair failed and we were unable to recover it. 
00:29:15.271 [2024-07-24 23:17:32.835290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.271 [2024-07-24 23:17:32.835317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.271 qpair failed and we were unable to recover it. 00:29:15.271 [2024-07-24 23:17:32.835707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.271 [2024-07-24 23:17:32.835733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.271 qpair failed and we were unable to recover it. 00:29:15.271 [2024-07-24 23:17:32.836013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.271 [2024-07-24 23:17:32.836043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.271 qpair failed and we were unable to recover it. 00:29:15.271 [2024-07-24 23:17:32.836544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.271 [2024-07-24 23:17:32.836577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.271 qpair failed and we were unable to recover it. 00:29:15.271 [2024-07-24 23:17:32.836958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.271 [2024-07-24 23:17:32.836987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.271 qpair failed and we were unable to recover it. 
00:29:15.271 [2024-07-24 23:17:32.837466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.271 [2024-07-24 23:17:32.837492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.271 qpair failed and we were unable to recover it. 00:29:15.271 [2024-07-24 23:17:32.837801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.271 [2024-07-24 23:17:32.837829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.271 qpair failed and we were unable to recover it. 00:29:15.271 [2024-07-24 23:17:32.838228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.271 [2024-07-24 23:17:32.838255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.271 qpair failed and we were unable to recover it. 00:29:15.272 [2024-07-24 23:17:32.838677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.272 [2024-07-24 23:17:32.838703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.272 qpair failed and we were unable to recover it. 00:29:15.272 [2024-07-24 23:17:32.839113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.272 [2024-07-24 23:17:32.839141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.272 qpair failed and we were unable to recover it. 
00:29:15.272 [2024-07-24 23:17:32.839550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.272 [2024-07-24 23:17:32.839577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.272 qpair failed and we were unable to recover it. 00:29:15.272 [2024-07-24 23:17:32.840047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.272 [2024-07-24 23:17:32.840074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.272 qpair failed and we were unable to recover it. 00:29:15.272 [2024-07-24 23:17:32.840465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.272 [2024-07-24 23:17:32.840491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.272 qpair failed and we were unable to recover it. 00:29:15.272 [2024-07-24 23:17:32.840938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.272 [2024-07-24 23:17:32.840966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.272 qpair failed and we were unable to recover it. 00:29:15.272 [2024-07-24 23:17:32.841359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.272 [2024-07-24 23:17:32.841385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.272 qpair failed and we were unable to recover it. 
00:29:15.272 [2024-07-24 23:17:32.841779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.272 [2024-07-24 23:17:32.841807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.272 qpair failed and we were unable to recover it. 00:29:15.272 [2024-07-24 23:17:32.842214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.272 [2024-07-24 23:17:32.842240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.272 qpair failed and we were unable to recover it. 00:29:15.272 [2024-07-24 23:17:32.842520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.272 [2024-07-24 23:17:32.842546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.272 qpair failed and we were unable to recover it. 00:29:15.272 [2024-07-24 23:17:32.842875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.272 [2024-07-24 23:17:32.842903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.272 qpair failed and we were unable to recover it. 00:29:15.272 [2024-07-24 23:17:32.843229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.272 [2024-07-24 23:17:32.843255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.272 qpair failed and we were unable to recover it. 
00:29:15.272 [2024-07-24 23:17:32.843456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.272 [2024-07-24 23:17:32.843486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.272 qpair failed and we were unable to recover it. 00:29:15.272 [2024-07-24 23:17:32.843899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.272 [2024-07-24 23:17:32.843928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.272 qpair failed and we were unable to recover it. 00:29:15.272 [2024-07-24 23:17:32.844347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.272 [2024-07-24 23:17:32.844374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.272 qpair failed and we were unable to recover it. 00:29:15.272 [2024-07-24 23:17:32.844682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.272 [2024-07-24 23:17:32.844711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.272 qpair failed and we were unable to recover it. 00:29:15.272 [2024-07-24 23:17:32.845113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.272 [2024-07-24 23:17:32.845141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.272 qpair failed and we were unable to recover it. 
00:29:15.272 [2024-07-24 23:17:32.845481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.272 [2024-07-24 23:17:32.845508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.272 qpair failed and we were unable to recover it. 00:29:15.272 [2024-07-24 23:17:32.845969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.272 [2024-07-24 23:17:32.845997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.272 qpair failed and we were unable to recover it. 00:29:15.272 [2024-07-24 23:17:32.846294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.272 [2024-07-24 23:17:32.846320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.272 qpair failed and we were unable to recover it. 00:29:15.272 [2024-07-24 23:17:32.846731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.272 [2024-07-24 23:17:32.846771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.272 qpair failed and we were unable to recover it. 00:29:15.272 [2024-07-24 23:17:32.847064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.272 [2024-07-24 23:17:32.847093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.272 qpair failed and we were unable to recover it. 
00:29:15.272 [2024-07-24 23:17:32.847315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.272 [2024-07-24 23:17:32.847345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:15.272 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x7f6bf4000b90 at 10.0.0.2:4420, qpair unrecoverable) repeats with advancing timestamps from 23:17:32.847769 through 23:17:32.895671 ...]
00:29:15.275 [2024-07-24 23:17:32.896085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.275 [2024-07-24 23:17:32.896113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:15.275 qpair failed and we were unable to recover it.
00:29:15.275 [2024-07-24 23:17:32.896404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.275 [2024-07-24 23:17:32.896438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.275 qpair failed and we were unable to recover it. 00:29:15.275 [2024-07-24 23:17:32.896859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.275 [2024-07-24 23:17:32.896888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.275 qpair failed and we were unable to recover it. 00:29:15.275 [2024-07-24 23:17:32.897224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.275 [2024-07-24 23:17:32.897250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.275 qpair failed and we were unable to recover it. 00:29:15.275 [2024-07-24 23:17:32.897643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.275 [2024-07-24 23:17:32.897671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.275 qpair failed and we were unable to recover it. 00:29:15.275 [2024-07-24 23:17:32.898064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.275 [2024-07-24 23:17:32.898092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.275 qpair failed and we were unable to recover it. 
00:29:15.275 [2024-07-24 23:17:32.898488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.275 [2024-07-24 23:17:32.898515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.275 qpair failed and we were unable to recover it. 00:29:15.275 [2024-07-24 23:17:32.898912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.275 [2024-07-24 23:17:32.898940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.275 qpair failed and we were unable to recover it. 00:29:15.275 [2024-07-24 23:17:32.899387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.275 [2024-07-24 23:17:32.899414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.275 qpair failed and we were unable to recover it. 00:29:15.275 [2024-07-24 23:17:32.899808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.275 [2024-07-24 23:17:32.899835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.275 qpair failed and we were unable to recover it. 00:29:15.275 [2024-07-24 23:17:32.900254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.275 [2024-07-24 23:17:32.900280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.275 qpair failed and we were unable to recover it. 
00:29:15.275 [2024-07-24 23:17:32.900594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.275 [2024-07-24 23:17:32.900620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.275 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-24 23:17:32.901041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-24 23:17:32.901070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-24 23:17:32.901484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-24 23:17:32.901511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-24 23:17:32.901923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-24 23:17:32.901951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-24 23:17:32.902357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-24 23:17:32.902383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 
00:29:15.276 [2024-07-24 23:17:32.902797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-24 23:17:32.902825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-24 23:17:32.903243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-24 23:17:32.903270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-24 23:17:32.903674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-24 23:17:32.903700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-24 23:17:32.904111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-24 23:17:32.904150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-24 23:17:32.904526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-24 23:17:32.904553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 
00:29:15.276 [2024-07-24 23:17:32.905027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-24 23:17:32.905056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-24 23:17:32.905512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-24 23:17:32.905539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-24 23:17:32.905962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-24 23:17:32.905990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-24 23:17:32.906408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-24 23:17:32.906434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-24 23:17:32.906850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-24 23:17:32.906878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 
00:29:15.276 [2024-07-24 23:17:32.907273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-24 23:17:32.907301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-24 23:17:32.907737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-24 23:17:32.907772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-24 23:17:32.908189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-24 23:17:32.908216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-24 23:17:32.908629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-24 23:17:32.908655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-24 23:17:32.909065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-24 23:17:32.909093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 
00:29:15.276 [2024-07-24 23:17:32.909511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-24 23:17:32.909538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-24 23:17:32.909965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-24 23:17:32.909992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-24 23:17:32.910297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-24 23:17:32.910327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-24 23:17:32.910616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-24 23:17:32.910647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-24 23:17:32.911057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-24 23:17:32.911085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 
00:29:15.276 [2024-07-24 23:17:32.911389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-24 23:17:32.911416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-24 23:17:32.911816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-24 23:17:32.911843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-24 23:17:32.912144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-24 23:17:32.912169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-24 23:17:32.912636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-24 23:17:32.912663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-24 23:17:32.913049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-24 23:17:32.913077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 
00:29:15.276 [2024-07-24 23:17:32.913496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-24 23:17:32.913523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-24 23:17:32.913936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-24 23:17:32.913963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-24 23:17:32.914279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-24 23:17:32.914309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-24 23:17:32.914591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-24 23:17:32.914618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-24 23:17:32.915038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-24 23:17:32.915066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 
00:29:15.277 [2024-07-24 23:17:32.915363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-24 23:17:32.915394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-24 23:17:32.915790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-24 23:17:32.915819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-24 23:17:32.916071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-24 23:17:32.916097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-24 23:17:32.916377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-24 23:17:32.916403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-24 23:17:32.916784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-24 23:17:32.916813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 
00:29:15.277 [2024-07-24 23:17:32.917208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-24 23:17:32.917236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-24 23:17:32.917518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-24 23:17:32.917548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-24 23:17:32.917947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-24 23:17:32.917975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-24 23:17:32.918391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-24 23:17:32.918417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-24 23:17:32.918818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-24 23:17:32.918847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 
00:29:15.277 [2024-07-24 23:17:32.919243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-24 23:17:32.919270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-24 23:17:32.919684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-24 23:17:32.919711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-24 23:17:32.920103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-24 23:17:32.920131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-24 23:17:32.920551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-24 23:17:32.920584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-24 23:17:32.920969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-24 23:17:32.920997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 
00:29:15.277 [2024-07-24 23:17:32.921313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-24 23:17:32.921340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-24 23:17:32.921651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-24 23:17:32.921678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-24 23:17:32.922114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-24 23:17:32.922142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-24 23:17:32.922553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-24 23:17:32.922579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-24 23:17:32.923001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-24 23:17:32.923029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 
00:29:15.277 [2024-07-24 23:17:32.923436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-24 23:17:32.923463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-24 23:17:32.923872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-24 23:17:32.923900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-24 23:17:32.924316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-24 23:17:32.924342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-24 23:17:32.924636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-24 23:17:32.924666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-24 23:17:32.925067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-24 23:17:32.925096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 
00:29:15.277 [2024-07-24 23:17:32.925481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-24 23:17:32.925508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-24 23:17:32.925931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-24 23:17:32.925958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-24 23:17:32.926294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-24 23:17:32.926322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-24 23:17:32.926793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-24 23:17:32.926822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-24 23:17:32.927211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-24 23:17:32.927239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 
00:29:15.277 [2024-07-24 23:17:32.927651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.277 [2024-07-24 23:17:32.927678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:15.277 qpair failed and we were unable to recover it.
[editor's note: the three lines above (errno 111 = ECONNREFUSED) repeat ~114 more times with identical tqpair=0x7f6bf4000b90, addr=10.0.0.2, port=4420, at timestamps 23:17:32.928075 through 23:17:32.975644; the repeats are elided here]
00:29:15.281 [2024-07-24 23:17:32.975952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-24 23:17:32.975980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-24 23:17:32.976404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-24 23:17:32.976430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-24 23:17:32.976846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-24 23:17:32.976875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-24 23:17:32.977301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-24 23:17:32.977327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-24 23:17:32.977796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-24 23:17:32.977823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 
00:29:15.281 [2024-07-24 23:17:32.978243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-24 23:17:32.978270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-24 23:17:32.978682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-24 23:17:32.978709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-24 23:17:32.979195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-24 23:17:32.979224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-24 23:17:32.979624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-24 23:17:32.979651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-24 23:17:32.980066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-24 23:17:32.980093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 
00:29:15.281 [2024-07-24 23:17:32.980507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-24 23:17:32.980534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-24 23:17:32.980945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-24 23:17:32.980973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-24 23:17:32.981364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-24 23:17:32.981391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-24 23:17:32.981804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-24 23:17:32.981831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-24 23:17:32.982142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-24 23:17:32.982168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 
00:29:15.281 [2024-07-24 23:17:32.982594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-24 23:17:32.982621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-24 23:17:32.983018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-24 23:17:32.983045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-24 23:17:32.983452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-24 23:17:32.983478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-24 23:17:32.983889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-24 23:17:32.983917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-24 23:17:32.984325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-24 23:17:32.984351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 
00:29:15.281 [2024-07-24 23:17:32.984600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-24 23:17:32.984625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-24 23:17:32.985015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-24 23:17:32.985043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-24 23:17:32.985458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-24 23:17:32.985485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-24 23:17:32.985916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-24 23:17:32.985943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-24 23:17:32.986348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-24 23:17:32.986375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 
00:29:15.281 [2024-07-24 23:17:32.986684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-24 23:17:32.986712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-24 23:17:32.987148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-24 23:17:32.987175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-24 23:17:32.987570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-24 23:17:32.987597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-24 23:17:32.987908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-24 23:17:32.987950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-24 23:17:32.988352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-24 23:17:32.988379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 
00:29:15.282 [2024-07-24 23:17:32.988795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-24 23:17:32.988823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-24 23:17:32.989116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-24 23:17:32.989147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-24 23:17:32.989333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-24 23:17:32.989360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-24 23:17:32.989747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-24 23:17:32.989784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-24 23:17:32.990213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-24 23:17:32.990239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 
00:29:15.282 [2024-07-24 23:17:32.990648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-24 23:17:32.990674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-24 23:17:32.991111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-24 23:17:32.991139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-24 23:17:32.991559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-24 23:17:32.991585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-24 23:17:32.991999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-24 23:17:32.992027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-24 23:17:32.992317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-24 23:17:32.992346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 
00:29:15.282 [2024-07-24 23:17:32.992781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-24 23:17:32.992809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-24 23:17:32.993292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-24 23:17:32.993318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-24 23:17:32.993770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-24 23:17:32.993798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-24 23:17:32.994214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-24 23:17:32.994240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-24 23:17:32.994645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-24 23:17:32.994672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 
00:29:15.282 [2024-07-24 23:17:32.995077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-24 23:17:32.995105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-24 23:17:32.995519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-24 23:17:32.995545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-24 23:17:32.996028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-24 23:17:32.996056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-24 23:17:32.996491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-24 23:17:32.996517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-24 23:17:32.996932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-24 23:17:32.996960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 
00:29:15.282 [2024-07-24 23:17:32.997378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-24 23:17:32.997405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-24 23:17:32.997697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-24 23:17:32.997727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-24 23:17:32.997970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-24 23:17:32.997998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-24 23:17:32.998416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-24 23:17:32.998443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-24 23:17:32.998853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-24 23:17:32.998880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 
00:29:15.282 [2024-07-24 23:17:32.999275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-24 23:17:32.999302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-24 23:17:32.999716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-24 23:17:32.999742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-24 23:17:33.000027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-24 23:17:33.000058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-24 23:17:33.000461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-24 23:17:33.000487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-24 23:17:33.000841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-24 23:17:33.000869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 
00:29:15.282 [2024-07-24 23:17:33.001267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-24 23:17:33.001294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-24 23:17:33.001704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.001731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-24 23:17:33.002161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.002189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-24 23:17:33.002662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.002688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-24 23:17:33.003084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.003112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 
00:29:15.283 [2024-07-24 23:17:33.003478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.003506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-24 23:17:33.003907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.003935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-24 23:17:33.004361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.004387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-24 23:17:33.004815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.004850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-24 23:17:33.005256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.005283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 
00:29:15.283 [2024-07-24 23:17:33.005703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.005729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-24 23:17:33.006140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.006167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-24 23:17:33.006563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.006589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-24 23:17:33.007001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.007028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-24 23:17:33.007330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.007357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 
00:29:15.283 [2024-07-24 23:17:33.007794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.007823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-24 23:17:33.008220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.008246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-24 23:17:33.008553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.008582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-24 23:17:33.008999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.009028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-24 23:17:33.009450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.009477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 
00:29:15.283 [2024-07-24 23:17:33.009876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.009903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-24 23:17:33.010178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.010204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-24 23:17:33.010627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.010655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-24 23:17:33.011105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.011133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-24 23:17:33.011527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.011554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 
00:29:15.283 [2024-07-24 23:17:33.011879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.011907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-24 23:17:33.012325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.012352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-24 23:17:33.012766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.012794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-24 23:17:33.013205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.013232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-24 23:17:33.013631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.013658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 
00:29:15.283 [2024-07-24 23:17:33.014138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.014166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-24 23:17:33.014567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.014593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-24 23:17:33.015011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.015038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-24 23:17:33.015323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.015352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-24 23:17:33.015732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.015767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 
00:29:15.283 [2024-07-24 23:17:33.016213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.016241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-24 23:17:33.016646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-24 23:17:33.016672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-24 23:17:33.016941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-24 23:17:33.016968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-24 23:17:33.017428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-24 23:17:33.017454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-24 23:17:33.017880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-24 23:17:33.017908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 
00:29:15.284 [2024-07-24 23:17:33.018196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-24 23:17:33.018225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-24 23:17:33.018613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-24 23:17:33.018640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-24 23:17:33.019048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-24 23:17:33.019075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-24 23:17:33.019472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-24 23:17:33.019498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-24 23:17:33.019921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-24 23:17:33.019949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 
00:29:15.284 [2024-07-24 23:17:33.020232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-24 23:17:33.020261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-24 23:17:33.020696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-24 23:17:33.020722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-24 23:17:33.021135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-24 23:17:33.021163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-24 23:17:33.021589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-24 23:17:33.021616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-24 23:17:33.021920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-24 23:17:33.021949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 
00:29:15.284 [2024-07-24 23:17:33.022350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-24 23:17:33.022377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-24 23:17:33.022803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-24 23:17:33.022832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-24 23:17:33.023254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-24 23:17:33.023281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-24 23:17:33.023696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-24 23:17:33.023723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-24 23:17:33.024128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-24 23:17:33.024157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 
00:29:15.284 [2024-07-24 23:17:33.024582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-24 23:17:33.024608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-24 23:17:33.024997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-24 23:17:33.025025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-24 23:17:33.025442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-24 23:17:33.025469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-24 23:17:33.025827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-24 23:17:33.025856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-24 23:17:33.026142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-24 23:17:33.026168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 
00:29:15.284 [2024-07-24 23:17:33.026590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-24 23:17:33.026617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-24 23:17:33.026918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-24 23:17:33.026947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-24 23:17:33.027248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-24 23:17:33.027278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-24 23:17:33.027723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-24 23:17:33.027750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-24 23:17:33.028184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-24 23:17:33.028211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 
00:29:15.284 [2024-07-24 23:17:33.028504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-24 23:17:33.028531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-24 23:17:33.028874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-24 23:17:33.028901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-24 23:17:33.029334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-24 23:17:33.029360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-24 23:17:33.029788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-24 23:17:33.029817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-24 23:17:33.030236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-24 23:17:33.030262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 
00:29:15.285 [2024-07-24 23:17:33.030680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-24 23:17:33.030707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-24 23:17:33.031138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-24 23:17:33.031166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-24 23:17:33.031571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-24 23:17:33.031597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-24 23:17:33.031998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-24 23:17:33.032025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-24 23:17:33.032444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-24 23:17:33.032470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 
00:29:15.285 [2024-07-24 23:17:33.032871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-24 23:17:33.032904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-24 23:17:33.033223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-24 23:17:33.033255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-24 23:17:33.033724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-24 23:17:33.033761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-24 23:17:33.034070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-24 23:17:33.034100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-24 23:17:33.034521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-24 23:17:33.034548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 
00:29:15.285 [2024-07-24 23:17:33.034947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-24 23:17:33.034976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-24 23:17:33.035374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-24 23:17:33.035401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-24 23:17:33.035733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-24 23:17:33.035767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-24 23:17:33.036186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-24 23:17:33.036213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-24 23:17:33.036608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-24 23:17:33.036634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 
00:29:15.285 [2024-07-24 23:17:33.036947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-24 23:17:33.036975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-24 23:17:33.037364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-24 23:17:33.037390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-24 23:17:33.037789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-24 23:17:33.037816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-24 23:17:33.038250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-24 23:17:33.038276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-24 23:17:33.038588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-24 23:17:33.038619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 
00:29:15.285 [2024-07-24 23:17:33.039021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-24 23:17:33.039050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-24 23:17:33.039447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-24 23:17:33.039473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-24 23:17:33.039876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-24 23:17:33.039904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-24 23:17:33.040300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-24 23:17:33.040327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-24 23:17:33.040739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-24 23:17:33.040776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 
00:29:15.285 [2024-07-24 23:17:33.041197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-24 23:17:33.041224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-24 23:17:33.041709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-24 23:17:33.041735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-24 23:17:33.042188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-24 23:17:33.042216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-24 23:17:33.042631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-24 23:17:33.042658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.552 [2024-07-24 23:17:33.042964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.552 [2024-07-24 23:17:33.042995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.552 qpair failed and we were unable to recover it. 
00:29:15.552 [2024-07-24 23:17:33.043393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.552 [2024-07-24 23:17:33.043422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.552 qpair failed and we were unable to recover it. 00:29:15.552 [2024-07-24 23:17:33.043849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.552 [2024-07-24 23:17:33.043877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.552 qpair failed and we were unable to recover it. 00:29:15.552 [2024-07-24 23:17:33.044320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.552 [2024-07-24 23:17:33.044347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.552 qpair failed and we were unable to recover it. 00:29:15.552 [2024-07-24 23:17:33.044773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.552 [2024-07-24 23:17:33.044800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.552 qpair failed and we were unable to recover it. 00:29:15.552 [2024-07-24 23:17:33.045246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.552 [2024-07-24 23:17:33.045275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.552 qpair failed and we were unable to recover it. 
00:29:15.552 [2024-07-24 23:17:33.045688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.552 [2024-07-24 23:17:33.045715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.552 qpair failed and we were unable to recover it. 00:29:15.552 [2024-07-24 23:17:33.046121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.552 [2024-07-24 23:17:33.046149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.552 qpair failed and we were unable to recover it. 00:29:15.552 [2024-07-24 23:17:33.046562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.552 [2024-07-24 23:17:33.046588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.552 qpair failed and we were unable to recover it. 00:29:15.552 [2024-07-24 23:17:33.046892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.552 [2024-07-24 23:17:33.046920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.552 qpair failed and we were unable to recover it. 00:29:15.552 [2024-07-24 23:17:33.047346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.552 [2024-07-24 23:17:33.047373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.552 qpair failed and we were unable to recover it. 
00:29:15.552 [2024-07-24 23:17:33.047798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.552 [2024-07-24 23:17:33.047826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.552 qpair failed and we were unable to recover it. 00:29:15.552 [2024-07-24 23:17:33.048136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.552 [2024-07-24 23:17:33.048163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.552 qpair failed and we were unable to recover it. 00:29:15.552 [2024-07-24 23:17:33.048562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.552 [2024-07-24 23:17:33.048589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.552 qpair failed and we were unable to recover it. 00:29:15.552 [2024-07-24 23:17:33.048905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.552 [2024-07-24 23:17:33.048936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.552 qpair failed and we were unable to recover it. 00:29:15.552 [2024-07-24 23:17:33.049345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.552 [2024-07-24 23:17:33.049372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.552 qpair failed and we were unable to recover it. 
00:29:15.552 [2024-07-24 23:17:33.049772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.049807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 00:29:15.553 [2024-07-24 23:17:33.050228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.050254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 00:29:15.553 [2024-07-24 23:17:33.050669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.050696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 00:29:15.553 [2024-07-24 23:17:33.051008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.051040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 00:29:15.553 [2024-07-24 23:17:33.051343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.051368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 
00:29:15.553 [2024-07-24 23:17:33.051791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.051819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 00:29:15.553 [2024-07-24 23:17:33.052236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.052263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 00:29:15.553 [2024-07-24 23:17:33.052684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.052710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 00:29:15.553 [2024-07-24 23:17:33.053134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.053162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 00:29:15.553 [2024-07-24 23:17:33.053433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.053471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 
00:29:15.553 [2024-07-24 23:17:33.053843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.053871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 00:29:15.553 [2024-07-24 23:17:33.054309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.054335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 00:29:15.553 [2024-07-24 23:17:33.054735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.054770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 00:29:15.553 [2024-07-24 23:17:33.055250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.055277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 00:29:15.553 [2024-07-24 23:17:33.055688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.055715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 
00:29:15.553 [2024-07-24 23:17:33.056110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.056139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 00:29:15.553 [2024-07-24 23:17:33.056563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.056590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 00:29:15.553 [2024-07-24 23:17:33.057021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.057050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 00:29:15.553 [2024-07-24 23:17:33.057422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.057449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 00:29:15.553 [2024-07-24 23:17:33.057843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.057870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 
00:29:15.553 [2024-07-24 23:17:33.058269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.058296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 00:29:15.553 [2024-07-24 23:17:33.058585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.058616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 00:29:15.553 [2024-07-24 23:17:33.059048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.059076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 00:29:15.553 [2024-07-24 23:17:33.059535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.059563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 00:29:15.553 [2024-07-24 23:17:33.059996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.060023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 
00:29:15.553 [2024-07-24 23:17:33.060422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.060449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 00:29:15.553 [2024-07-24 23:17:33.060863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.060891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 00:29:15.553 [2024-07-24 23:17:33.061313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.061339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 00:29:15.553 [2024-07-24 23:17:33.061632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.061660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 00:29:15.553 [2024-07-24 23:17:33.062006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.062033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 
00:29:15.553 [2024-07-24 23:17:33.062330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.062361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 00:29:15.553 [2024-07-24 23:17:33.062802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.062830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 00:29:15.553 [2024-07-24 23:17:33.063228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.063255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 00:29:15.553 [2024-07-24 23:17:33.063653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.063680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 00:29:15.553 [2024-07-24 23:17:33.063988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.064016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 
00:29:15.553 [2024-07-24 23:17:33.064308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.064337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 00:29:15.553 [2024-07-24 23:17:33.064739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.553 [2024-07-24 23:17:33.064776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.553 qpair failed and we were unable to recover it. 00:29:15.553 [2024-07-24 23:17:33.065159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.065186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-24 23:17:33.065606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.065632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-24 23:17:33.065979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.066006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 
00:29:15.554 [2024-07-24 23:17:33.066369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.066402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-24 23:17:33.066816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.066845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-24 23:17:33.067267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.067294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-24 23:17:33.067576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.067605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-24 23:17:33.068023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.068050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 
00:29:15.554 [2024-07-24 23:17:33.068447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.068474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-24 23:17:33.068900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.068927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-24 23:17:33.069347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.069374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-24 23:17:33.069679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.069706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-24 23:17:33.069909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.069942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 
00:29:15.554 [2024-07-24 23:17:33.070353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.070381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-24 23:17:33.070792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.070821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-24 23:17:33.071220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.071246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-24 23:17:33.071701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.071727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-24 23:17:33.072127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.072155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 
00:29:15.554 [2024-07-24 23:17:33.072525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.072552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-24 23:17:33.072991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.073019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-24 23:17:33.073426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.073452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-24 23:17:33.073858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.073885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-24 23:17:33.074198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.074223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 
00:29:15.554 [2024-07-24 23:17:33.074655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.074680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-24 23:17:33.074979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.075007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-24 23:17:33.075322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.075347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-24 23:17:33.075787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.075815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-24 23:17:33.076250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.076277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 
00:29:15.554 [2024-07-24 23:17:33.076640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.076667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-24 23:17:33.077071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.077098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-24 23:17:33.077534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.077560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-24 23:17:33.077964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.077992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-24 23:17:33.078391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.078418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 
00:29:15.554 [2024-07-24 23:17:33.078830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.078859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-24 23:17:33.079163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.079192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-24 23:17:33.079555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.079582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-24 23:17:33.080014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.080043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-24 23:17:33.080500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.554 [2024-07-24 23:17:33.080527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.554 qpair failed and we were unable to recover it. 
00:29:15.555 [2024-07-24 23:17:33.080939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.555 [2024-07-24 23:17:33.080968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.555 qpair failed and we were unable to recover it. 00:29:15.555 [2024-07-24 23:17:33.081413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.555 [2024-07-24 23:17:33.081441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.555 qpair failed and we were unable to recover it. 00:29:15.555 [2024-07-24 23:17:33.081912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.555 [2024-07-24 23:17:33.081941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.555 qpair failed and we were unable to recover it. 00:29:15.555 [2024-07-24 23:17:33.082345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.555 [2024-07-24 23:17:33.082373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.555 qpair failed and we were unable to recover it. 00:29:15.555 [2024-07-24 23:17:33.082787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.555 [2024-07-24 23:17:33.082816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.555 qpair failed and we were unable to recover it. 
00:29:15.555 [2024-07-24 23:17:33.083258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.555 [2024-07-24 23:17:33.083291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.555 qpair failed and we were unable to recover it. 00:29:15.555 [2024-07-24 23:17:33.083720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.555 [2024-07-24 23:17:33.083747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.555 qpair failed and we were unable to recover it. 00:29:15.555 [2024-07-24 23:17:33.084038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.555 [2024-07-24 23:17:33.084066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.555 qpair failed and we were unable to recover it. 00:29:15.555 [2024-07-24 23:17:33.084488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.555 [2024-07-24 23:17:33.084514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.555 qpair failed and we were unable to recover it. 00:29:15.555 [2024-07-24 23:17:33.084937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.555 [2024-07-24 23:17:33.084965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.555 qpair failed and we were unable to recover it. 
00:29:15.555 [2024-07-24 23:17:33.085349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.555 [2024-07-24 23:17:33.085376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.555 qpair failed and we were unable to recover it. 00:29:15.555 [2024-07-24 23:17:33.085790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.555 [2024-07-24 23:17:33.085819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.555 qpair failed and we were unable to recover it. 00:29:15.555 [2024-07-24 23:17:33.086236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.555 [2024-07-24 23:17:33.086262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.555 qpair failed and we were unable to recover it. 00:29:15.555 [2024-07-24 23:17:33.086698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.555 [2024-07-24 23:17:33.086725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.555 qpair failed and we were unable to recover it. 00:29:15.555 [2024-07-24 23:17:33.087148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.555 [2024-07-24 23:17:33.087176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.555 qpair failed and we were unable to recover it. 
00:29:15.555 [2024-07-24 23:17:33.087561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.555 [2024-07-24 23:17:33.087588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.555 qpair failed and we were unable to recover it. 
[... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." messages for tqpair=0x7f6bf4000b90 (addr=10.0.0.2, port=4420) repeat continuously from 23:17:33.087 through 23:17:33.137 ...]
00:29:15.558 [2024-07-24 23:17:33.137692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.558 [2024-07-24 23:17:33.137719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.558 qpair failed and we were unable to recover it. 00:29:15.558 [2024-07-24 23:17:33.138197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.558 [2024-07-24 23:17:33.138225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.558 qpair failed and we were unable to recover it. 00:29:15.558 [2024-07-24 23:17:33.138639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.558 [2024-07-24 23:17:33.138665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.558 qpair failed and we were unable to recover it. 00:29:15.558 [2024-07-24 23:17:33.139026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.558 [2024-07-24 23:17:33.139054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.558 qpair failed and we were unable to recover it. 00:29:15.558 [2024-07-24 23:17:33.139343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.558 [2024-07-24 23:17:33.139371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.558 qpair failed and we were unable to recover it. 
00:29:15.558 [2024-07-24 23:17:33.139669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.558 [2024-07-24 23:17:33.139698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.558 qpair failed and we were unable to recover it. 00:29:15.558 [2024-07-24 23:17:33.140174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.558 [2024-07-24 23:17:33.140203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.558 qpair failed and we were unable to recover it. 00:29:15.558 [2024-07-24 23:17:33.140610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.558 [2024-07-24 23:17:33.140636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.558 qpair failed and we were unable to recover it. 00:29:15.558 [2024-07-24 23:17:33.141050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.558 [2024-07-24 23:17:33.141078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.558 qpair failed and we were unable to recover it. 00:29:15.558 [2024-07-24 23:17:33.141494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.558 [2024-07-24 23:17:33.141522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.558 qpair failed and we were unable to recover it. 
00:29:15.558 [2024-07-24 23:17:33.141944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.558 [2024-07-24 23:17:33.141971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.558 qpair failed and we were unable to recover it. 00:29:15.558 [2024-07-24 23:17:33.142380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.558 [2024-07-24 23:17:33.142407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.558 qpair failed and we were unable to recover it. 00:29:15.558 [2024-07-24 23:17:33.142814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.558 [2024-07-24 23:17:33.142842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.558 qpair failed and we were unable to recover it. 00:29:15.559 [2024-07-24 23:17:33.143273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.143299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 00:29:15.559 [2024-07-24 23:17:33.143763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.143791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 
00:29:15.559 [2024-07-24 23:17:33.144190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.144217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 00:29:15.559 [2024-07-24 23:17:33.144644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.144671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 00:29:15.559 [2024-07-24 23:17:33.144987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.145015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 00:29:15.559 [2024-07-24 23:17:33.145367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.145393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 00:29:15.559 [2024-07-24 23:17:33.145703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.145732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 
00:29:15.559 [2024-07-24 23:17:33.146172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.146201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 00:29:15.559 [2024-07-24 23:17:33.146626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.146652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 00:29:15.559 [2024-07-24 23:17:33.147076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.147106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 00:29:15.559 [2024-07-24 23:17:33.147544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.147571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 00:29:15.559 [2024-07-24 23:17:33.147977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.148003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 
00:29:15.559 [2024-07-24 23:17:33.148430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.148456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 00:29:15.559 [2024-07-24 23:17:33.148883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.148911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 00:29:15.559 [2024-07-24 23:17:33.149345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.149371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 00:29:15.559 [2024-07-24 23:17:33.149578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.149607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 00:29:15.559 [2024-07-24 23:17:33.150068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.150096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 
00:29:15.559 [2024-07-24 23:17:33.150410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.150436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 00:29:15.559 [2024-07-24 23:17:33.150866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.150894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 00:29:15.559 [2024-07-24 23:17:33.151307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.151333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 00:29:15.559 [2024-07-24 23:17:33.151764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.151792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 00:29:15.559 [2024-07-24 23:17:33.152198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.152225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 
00:29:15.559 [2024-07-24 23:17:33.152633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.152665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 00:29:15.559 [2024-07-24 23:17:33.153069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.153097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 00:29:15.559 [2024-07-24 23:17:33.153524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.153551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 00:29:15.559 [2024-07-24 23:17:33.153970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.153998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 00:29:15.559 [2024-07-24 23:17:33.154411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.154438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 
00:29:15.559 [2024-07-24 23:17:33.154848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.154876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 00:29:15.559 [2024-07-24 23:17:33.155201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.155227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 00:29:15.559 [2024-07-24 23:17:33.155654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.155680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 00:29:15.559 [2024-07-24 23:17:33.156053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.156082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 00:29:15.559 [2024-07-24 23:17:33.156504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.156530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 
00:29:15.559 [2024-07-24 23:17:33.156937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.156964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 00:29:15.559 [2024-07-24 23:17:33.157367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.157394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 00:29:15.559 [2024-07-24 23:17:33.157874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.157901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 00:29:15.559 [2024-07-24 23:17:33.158293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.158319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 00:29:15.559 [2024-07-24 23:17:33.158768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.158796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.559 qpair failed and we were unable to recover it. 
00:29:15.559 [2024-07-24 23:17:33.159227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.559 [2024-07-24 23:17:33.159253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.560 qpair failed and we were unable to recover it. 00:29:15.560 [2024-07-24 23:17:33.159635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.560 [2024-07-24 23:17:33.159661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.560 qpair failed and we were unable to recover it. 00:29:15.560 [2024-07-24 23:17:33.160138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.560 [2024-07-24 23:17:33.160166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.560 qpair failed and we were unable to recover it. 00:29:15.560 [2024-07-24 23:17:33.160590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.560 [2024-07-24 23:17:33.160617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.560 qpair failed and we were unable to recover it. 00:29:15.560 [2024-07-24 23:17:33.161060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.560 [2024-07-24 23:17:33.161087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.560 qpair failed and we were unable to recover it. 
00:29:15.560 [2024-07-24 23:17:33.161493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.560 [2024-07-24 23:17:33.161519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.560 qpair failed and we were unable to recover it. 00:29:15.560 [2024-07-24 23:17:33.161927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.560 [2024-07-24 23:17:33.161956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.560 qpair failed and we were unable to recover it. 00:29:15.560 [2024-07-24 23:17:33.162379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.560 [2024-07-24 23:17:33.162406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.560 qpair failed and we were unable to recover it. 00:29:15.560 [2024-07-24 23:17:33.162833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.560 [2024-07-24 23:17:33.162862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.560 qpair failed and we were unable to recover it. 00:29:15.560 [2024-07-24 23:17:33.163292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.560 [2024-07-24 23:17:33.163318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.560 qpair failed and we were unable to recover it. 
00:29:15.560 [2024-07-24 23:17:33.163692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.560 [2024-07-24 23:17:33.163719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.560 qpair failed and we were unable to recover it. 00:29:15.560 [2024-07-24 23:17:33.164033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.560 [2024-07-24 23:17:33.164063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.560 qpair failed and we were unable to recover it. 00:29:15.560 [2024-07-24 23:17:33.164481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.560 [2024-07-24 23:17:33.164509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.560 qpair failed and we were unable to recover it. 00:29:15.560 [2024-07-24 23:17:33.164913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.560 [2024-07-24 23:17:33.164941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.560 qpair failed and we were unable to recover it. 00:29:15.560 [2024-07-24 23:17:33.165199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.560 [2024-07-24 23:17:33.165227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.560 qpair failed and we were unable to recover it. 
00:29:15.560 [2024-07-24 23:17:33.165545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.560 [2024-07-24 23:17:33.165574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.560 qpair failed and we were unable to recover it. 00:29:15.560 [2024-07-24 23:17:33.165985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.560 [2024-07-24 23:17:33.166013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.560 qpair failed and we were unable to recover it. 00:29:15.560 [2024-07-24 23:17:33.166423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.560 [2024-07-24 23:17:33.166450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.560 qpair failed and we were unable to recover it. 00:29:15.560 [2024-07-24 23:17:33.166886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.560 [2024-07-24 23:17:33.166914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.560 qpair failed and we were unable to recover it. 00:29:15.560 [2024-07-24 23:17:33.167339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.560 [2024-07-24 23:17:33.167365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.560 qpair failed and we were unable to recover it. 
00:29:15.560 [2024-07-24 23:17:33.167792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.560 [2024-07-24 23:17:33.167820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.560 qpair failed and we were unable to recover it. 00:29:15.560 [2024-07-24 23:17:33.168112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.560 [2024-07-24 23:17:33.168140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.560 qpair failed and we were unable to recover it. 00:29:15.560 [2024-07-24 23:17:33.168555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.560 [2024-07-24 23:17:33.168582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.560 qpair failed and we were unable to recover it. 00:29:15.560 [2024-07-24 23:17:33.169069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.560 [2024-07-24 23:17:33.169097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.560 qpair failed and we were unable to recover it. 00:29:15.560 [2024-07-24 23:17:33.169461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.560 [2024-07-24 23:17:33.169487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.560 qpair failed and we were unable to recover it. 
00:29:15.560 [2024-07-24 23:17:33.169925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.560 [2024-07-24 23:17:33.169960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:15.560 qpair failed and we were unable to recover it.
[... the same three-line error (connect() failed, errno = 111 / sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats with new timestamps only, from [2024-07-24 23:17:33.170389] through [2024-07-24 23:17:33.219615] ...]
00:29:15.563 [2024-07-24 23:17:33.220058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.563 [2024-07-24 23:17:33.220088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:15.563 qpair failed and we were unable to recover it.
00:29:15.563 [2024-07-24 23:17:33.220493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.563 [2024-07-24 23:17:33.220520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.563 qpair failed and we were unable to recover it. 00:29:15.563 [2024-07-24 23:17:33.220942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.563 [2024-07-24 23:17:33.220971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.563 qpair failed and we were unable to recover it. 00:29:15.563 [2024-07-24 23:17:33.221395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.563 [2024-07-24 23:17:33.221424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.563 qpair failed and we were unable to recover it. 00:29:15.563 [2024-07-24 23:17:33.221818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.563 [2024-07-24 23:17:33.221846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.563 qpair failed and we were unable to recover it. 00:29:15.563 [2024-07-24 23:17:33.222286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.222312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 
00:29:15.564 [2024-07-24 23:17:33.222795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.222851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 00:29:15.564 [2024-07-24 23:17:33.223284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.223311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 00:29:15.564 [2024-07-24 23:17:33.223733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.223771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 00:29:15.564 [2024-07-24 23:17:33.224194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.224223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 00:29:15.564 [2024-07-24 23:17:33.224634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.224662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 
00:29:15.564 [2024-07-24 23:17:33.225068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.225097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 00:29:15.564 [2024-07-24 23:17:33.225406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.225436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 00:29:15.564 [2024-07-24 23:17:33.225777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.225810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 00:29:15.564 [2024-07-24 23:17:33.226139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.226167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 00:29:15.564 [2024-07-24 23:17:33.226570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.226601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 
00:29:15.564 [2024-07-24 23:17:33.227059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.227089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 00:29:15.564 [2024-07-24 23:17:33.227512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.227542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 00:29:15.564 [2024-07-24 23:17:33.227964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.227993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 00:29:15.564 [2024-07-24 23:17:33.228419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.228446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 00:29:15.564 [2024-07-24 23:17:33.228885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.228915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 
00:29:15.564 [2024-07-24 23:17:33.229323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.229350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 00:29:15.564 [2024-07-24 23:17:33.229779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.229807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 00:29:15.564 [2024-07-24 23:17:33.230219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.230248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 00:29:15.564 [2024-07-24 23:17:33.230666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.230693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 00:29:15.564 [2024-07-24 23:17:33.231116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.231144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 
00:29:15.564 [2024-07-24 23:17:33.231555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.231582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 00:29:15.564 [2024-07-24 23:17:33.232025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.232053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 00:29:15.564 [2024-07-24 23:17:33.232469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.232496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 00:29:15.564 [2024-07-24 23:17:33.232881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.232910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 00:29:15.564 [2024-07-24 23:17:33.233344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.233371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 
00:29:15.564 [2024-07-24 23:17:33.233811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.233839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 00:29:15.564 [2024-07-24 23:17:33.234257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.234284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 00:29:15.564 [2024-07-24 23:17:33.234730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.234767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 00:29:15.564 [2024-07-24 23:17:33.235187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.235214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 00:29:15.564 [2024-07-24 23:17:33.235634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.235661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 
00:29:15.564 [2024-07-24 23:17:33.236157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.236184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 00:29:15.564 [2024-07-24 23:17:33.236488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.236517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 00:29:15.564 [2024-07-24 23:17:33.236942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.236970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 00:29:15.564 [2024-07-24 23:17:33.237381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.237408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 00:29:15.564 [2024-07-24 23:17:33.237838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.237866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 
00:29:15.564 [2024-07-24 23:17:33.238302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.238330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 00:29:15.564 [2024-07-24 23:17:33.238742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.564 [2024-07-24 23:17:33.238779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.564 qpair failed and we were unable to recover it. 00:29:15.565 [2024-07-24 23:17:33.239219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.565 [2024-07-24 23:17:33.239246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.565 qpair failed and we were unable to recover it. 00:29:15.565 [2024-07-24 23:17:33.239686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.565 [2024-07-24 23:17:33.239712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.565 qpair failed and we were unable to recover it. 00:29:15.565 [2024-07-24 23:17:33.240229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.565 [2024-07-24 23:17:33.240256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.565 qpair failed and we were unable to recover it. 
00:29:15.565 [2024-07-24 23:17:33.240681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.565 [2024-07-24 23:17:33.240708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.565 qpair failed and we were unable to recover it. 00:29:15.565 [2024-07-24 23:17:33.241188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.565 [2024-07-24 23:17:33.241219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.565 qpair failed and we were unable to recover it. 00:29:15.565 [2024-07-24 23:17:33.241516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.565 [2024-07-24 23:17:33.241544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.565 qpair failed and we were unable to recover it. 00:29:15.565 [2024-07-24 23:17:33.241859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.565 [2024-07-24 23:17:33.241891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.565 qpair failed and we were unable to recover it. 00:29:15.565 [2024-07-24 23:17:33.242310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.565 [2024-07-24 23:17:33.242337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.565 qpair failed and we were unable to recover it. 
00:29:15.565 [2024-07-24 23:17:33.242778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.565 [2024-07-24 23:17:33.242808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.565 qpair failed and we were unable to recover it. 00:29:15.565 [2024-07-24 23:17:33.243227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.565 [2024-07-24 23:17:33.243253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.565 qpair failed and we were unable to recover it. 00:29:15.565 [2024-07-24 23:17:33.243666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.565 [2024-07-24 23:17:33.243693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.565 qpair failed and we were unable to recover it. 00:29:15.565 [2024-07-24 23:17:33.243972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.565 [2024-07-24 23:17:33.244004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.565 qpair failed and we were unable to recover it. 00:29:15.565 [2024-07-24 23:17:33.244450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.565 [2024-07-24 23:17:33.244477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.565 qpair failed and we were unable to recover it. 
00:29:15.565 [2024-07-24 23:17:33.244914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.565 [2024-07-24 23:17:33.244941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.565 qpair failed and we were unable to recover it. 00:29:15.565 [2024-07-24 23:17:33.245384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.565 [2024-07-24 23:17:33.245411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.565 qpair failed and we were unable to recover it. 00:29:15.565 [2024-07-24 23:17:33.245847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.565 [2024-07-24 23:17:33.245876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.565 qpair failed and we were unable to recover it. 00:29:15.565 [2024-07-24 23:17:33.246351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.565 [2024-07-24 23:17:33.246378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.565 qpair failed and we were unable to recover it. 00:29:15.565 [2024-07-24 23:17:33.246772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.565 [2024-07-24 23:17:33.246801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.565 qpair failed and we were unable to recover it. 
00:29:15.565 [2024-07-24 23:17:33.247299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.565 [2024-07-24 23:17:33.247326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.565 qpair failed and we were unable to recover it. 00:29:15.565 [2024-07-24 23:17:33.247767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.565 [2024-07-24 23:17:33.247796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.565 qpair failed and we were unable to recover it. 00:29:15.565 [2024-07-24 23:17:33.248245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.565 [2024-07-24 23:17:33.248273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.565 qpair failed and we were unable to recover it. 00:29:15.565 [2024-07-24 23:17:33.248693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.565 [2024-07-24 23:17:33.248719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.565 qpair failed and we were unable to recover it. 00:29:15.565 [2024-07-24 23:17:33.249183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.565 [2024-07-24 23:17:33.249212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.565 qpair failed and we were unable to recover it. 
00:29:15.565 [2024-07-24 23:17:33.249617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.565 [2024-07-24 23:17:33.249644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.565 qpair failed and we were unable to recover it. 00:29:15.565 [2024-07-24 23:17:33.250077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.565 [2024-07-24 23:17:33.250105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.565 qpair failed and we were unable to recover it. 00:29:15.565 [2024-07-24 23:17:33.250523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.565 [2024-07-24 23:17:33.250550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.565 qpair failed and we were unable to recover it. 00:29:15.565 [2024-07-24 23:17:33.250974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.565 [2024-07-24 23:17:33.251002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.565 qpair failed and we were unable to recover it. 00:29:15.565 [2024-07-24 23:17:33.251413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.565 [2024-07-24 23:17:33.251441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.565 qpair failed and we were unable to recover it. 
00:29:15.565 [2024-07-24 23:17:33.251851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.565 [2024-07-24 23:17:33.251879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.565 qpair failed and we were unable to recover it. 00:29:15.565 [2024-07-24 23:17:33.252172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.565 [2024-07-24 23:17:33.252201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.565 qpair failed and we were unable to recover it. 00:29:15.565 [2024-07-24 23:17:33.252626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.565 [2024-07-24 23:17:33.252661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.565 qpair failed and we were unable to recover it. 00:29:15.565 [2024-07-24 23:17:33.252890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.565 [2024-07-24 23:17:33.252918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.565 qpair failed and we were unable to recover it. 00:29:15.565 [2024-07-24 23:17:33.253215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.565 [2024-07-24 23:17:33.253245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.565 qpair failed and we were unable to recover it. 
00:29:15.565 [2024-07-24 23:17:33.253549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.565 [2024-07-24 23:17:33.253578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.565 qpair failed and we were unable to recover it.
[... the same two-line failure pair — posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 — repeats for every retry timestamp from 23:17:33.254017 through 23:17:33.304634; each attempt ends with "qpair failed and we were unable to recover it." ...]
00:29:15.569 [2024-07-24 23:17:33.305060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.305089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 00:29:15.569 [2024-07-24 23:17:33.305579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.305605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 00:29:15.569 [2024-07-24 23:17:33.306051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.306086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 00:29:15.569 [2024-07-24 23:17:33.306515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.306541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 00:29:15.569 [2024-07-24 23:17:33.306956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.306983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 
00:29:15.569 [2024-07-24 23:17:33.307415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.307444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 00:29:15.569 [2024-07-24 23:17:33.307883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.307912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 00:29:15.569 [2024-07-24 23:17:33.308217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.308247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 00:29:15.569 [2024-07-24 23:17:33.308632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.308659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 00:29:15.569 [2024-07-24 23:17:33.309085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.309112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 
00:29:15.569 [2024-07-24 23:17:33.309521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.309548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 00:29:15.569 [2024-07-24 23:17:33.309987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.310015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 00:29:15.569 [2024-07-24 23:17:33.310431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.310457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 00:29:15.569 [2024-07-24 23:17:33.310863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.310890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 00:29:15.569 [2024-07-24 23:17:33.311323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.311349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 
00:29:15.569 [2024-07-24 23:17:33.311738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.311775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 00:29:15.569 [2024-07-24 23:17:33.312236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.312263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 00:29:15.569 [2024-07-24 23:17:33.312694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.312722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 00:29:15.569 [2024-07-24 23:17:33.313173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.313201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 00:29:15.569 [2024-07-24 23:17:33.313611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.313637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 
00:29:15.569 [2024-07-24 23:17:33.314087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.314115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 00:29:15.569 [2024-07-24 23:17:33.314551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.314577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 00:29:15.569 [2024-07-24 23:17:33.314947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.314974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 00:29:15.569 [2024-07-24 23:17:33.315444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.315471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 00:29:15.569 [2024-07-24 23:17:33.315923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.315951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 
00:29:15.569 [2024-07-24 23:17:33.316388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.316415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 00:29:15.569 [2024-07-24 23:17:33.316805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.316832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 00:29:15.569 [2024-07-24 23:17:33.317276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.317302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 00:29:15.569 [2024-07-24 23:17:33.317739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.317782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 00:29:15.569 [2024-07-24 23:17:33.318228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.318255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 
00:29:15.569 [2024-07-24 23:17:33.318732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.318775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 00:29:15.569 [2024-07-24 23:17:33.319181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.319209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 00:29:15.569 [2024-07-24 23:17:33.319617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.319643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 00:29:15.569 [2024-07-24 23:17:33.320082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.320112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 00:29:15.569 [2024-07-24 23:17:33.320539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.569 [2024-07-24 23:17:33.320568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.569 qpair failed and we were unable to recover it. 
00:29:15.570 [2024-07-24 23:17:33.321005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.570 [2024-07-24 23:17:33.321035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.570 qpair failed and we were unable to recover it. 00:29:15.570 [2024-07-24 23:17:33.321450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.570 [2024-07-24 23:17:33.321478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.570 qpair failed and we were unable to recover it. 00:29:15.570 [2024-07-24 23:17:33.321951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.570 [2024-07-24 23:17:33.321981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.570 qpair failed and we were unable to recover it. 00:29:15.570 [2024-07-24 23:17:33.322294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.570 [2024-07-24 23:17:33.322320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.570 qpair failed and we were unable to recover it. 00:29:15.570 [2024-07-24 23:17:33.322776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.570 [2024-07-24 23:17:33.322803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.570 qpair failed and we were unable to recover it. 
00:29:15.570 [2024-07-24 23:17:33.323234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.570 [2024-07-24 23:17:33.323261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.570 qpair failed and we were unable to recover it. 00:29:15.570 [2024-07-24 23:17:33.323676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.570 [2024-07-24 23:17:33.323703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.570 qpair failed and we were unable to recover it. 00:29:15.570 [2024-07-24 23:17:33.324149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.570 [2024-07-24 23:17:33.324184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.570 qpair failed and we were unable to recover it. 00:29:15.570 [2024-07-24 23:17:33.324635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.570 [2024-07-24 23:17:33.324661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.570 qpair failed and we were unable to recover it. 00:29:15.570 [2024-07-24 23:17:33.325103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.570 [2024-07-24 23:17:33.325131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.570 qpair failed and we were unable to recover it. 
00:29:15.570 [2024-07-24 23:17:33.325563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.570 [2024-07-24 23:17:33.325590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.570 qpair failed and we were unable to recover it. 00:29:15.570 [2024-07-24 23:17:33.326017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.570 [2024-07-24 23:17:33.326045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.570 qpair failed and we were unable to recover it. 00:29:15.570 [2024-07-24 23:17:33.326469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.570 [2024-07-24 23:17:33.326496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.570 qpair failed and we were unable to recover it. 00:29:15.570 [2024-07-24 23:17:33.326840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.570 [2024-07-24 23:17:33.326868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.570 qpair failed and we were unable to recover it. 00:29:15.570 [2024-07-24 23:17:33.327248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.570 [2024-07-24 23:17:33.327276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.570 qpair failed and we were unable to recover it. 
00:29:15.570 [2024-07-24 23:17:33.327708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.570 [2024-07-24 23:17:33.327736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.570 qpair failed and we were unable to recover it. 00:29:15.570 [2024-07-24 23:17:33.328228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.570 [2024-07-24 23:17:33.328257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.570 qpair failed and we were unable to recover it. 00:29:15.570 [2024-07-24 23:17:33.328689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.570 [2024-07-24 23:17:33.328717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.570 qpair failed and we were unable to recover it. 00:29:15.570 [2024-07-24 23:17:33.329041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.570 [2024-07-24 23:17:33.329074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.570 qpair failed and we were unable to recover it. 00:29:15.570 [2024-07-24 23:17:33.329506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.570 [2024-07-24 23:17:33.329533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.570 qpair failed and we were unable to recover it. 
00:29:15.570 [2024-07-24 23:17:33.329941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.570 [2024-07-24 23:17:33.329970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.570 qpair failed and we were unable to recover it. 00:29:15.570 [2024-07-24 23:17:33.330280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.570 [2024-07-24 23:17:33.330306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.570 qpair failed and we were unable to recover it. 00:29:15.570 [2024-07-24 23:17:33.330738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.570 [2024-07-24 23:17:33.330778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.570 qpair failed and we were unable to recover it. 00:29:15.570 [2024-07-24 23:17:33.331083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.570 [2024-07-24 23:17:33.331111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.570 qpair failed and we were unable to recover it. 00:29:15.570 [2024-07-24 23:17:33.331492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.570 [2024-07-24 23:17:33.331520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.570 qpair failed and we were unable to recover it. 
00:29:15.570 [2024-07-24 23:17:33.331964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.570 [2024-07-24 23:17:33.331994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.570 qpair failed and we were unable to recover it. 00:29:15.835 [2024-07-24 23:17:33.332450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.835 [2024-07-24 23:17:33.332482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.835 qpair failed and we were unable to recover it. 00:29:15.835 [2024-07-24 23:17:33.332906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.835 [2024-07-24 23:17:33.332935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.835 qpair failed and we were unable to recover it. 00:29:15.835 [2024-07-24 23:17:33.333306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.835 [2024-07-24 23:17:33.333333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.835 qpair failed and we were unable to recover it. 00:29:15.835 [2024-07-24 23:17:33.333785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.835 [2024-07-24 23:17:33.333814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.835 qpair failed and we were unable to recover it. 
00:29:15.835 [2024-07-24 23:17:33.334227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.835 [2024-07-24 23:17:33.334254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.835 qpair failed and we were unable to recover it. 00:29:15.835 [2024-07-24 23:17:33.334773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.835 [2024-07-24 23:17:33.334802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.835 qpair failed and we were unable to recover it. 00:29:15.835 [2024-07-24 23:17:33.335240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.835 [2024-07-24 23:17:33.335267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.835 qpair failed and we were unable to recover it. 00:29:15.835 [2024-07-24 23:17:33.335626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.835 [2024-07-24 23:17:33.335654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.835 qpair failed and we were unable to recover it. 00:29:15.835 [2024-07-24 23:17:33.335962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.835 [2024-07-24 23:17:33.335996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.835 qpair failed and we were unable to recover it. 
00:29:15.835 [2024-07-24 23:17:33.336416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.835 [2024-07-24 23:17:33.336443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.835 qpair failed and we were unable to recover it. 00:29:15.835 [2024-07-24 23:17:33.336881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.835 [2024-07-24 23:17:33.336908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.835 qpair failed and we were unable to recover it. 00:29:15.835 [2024-07-24 23:17:33.337349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.835 [2024-07-24 23:17:33.337376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.835 qpair failed and we were unable to recover it. 00:29:15.835 [2024-07-24 23:17:33.337808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.835 [2024-07-24 23:17:33.337836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.835 qpair failed and we were unable to recover it. 00:29:15.835 [2024-07-24 23:17:33.338290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.835 [2024-07-24 23:17:33.338316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.835 qpair failed and we were unable to recover it. 
00:29:15.835 [2024-07-24 23:17:33.338744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.835 [2024-07-24 23:17:33.338783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.835 qpair failed and we were unable to recover it. 00:29:15.835 [2024-07-24 23:17:33.339214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.835 [2024-07-24 23:17:33.339245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.835 qpair failed and we were unable to recover it. 00:29:15.835 [2024-07-24 23:17:33.339674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.835 [2024-07-24 23:17:33.339702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.835 qpair failed and we were unable to recover it. 00:29:15.835 [2024-07-24 23:17:33.340165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.835 [2024-07-24 23:17:33.340196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.835 qpair failed and we were unable to recover it. 00:29:15.835 [2024-07-24 23:17:33.340423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.835 [2024-07-24 23:17:33.340452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.835 qpair failed and we were unable to recover it. 
00:29:15.838 [2024-07-24 23:17:33.391494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.838 [2024-07-24 23:17:33.391522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.838 qpair failed and we were unable to recover it. 00:29:15.838 [2024-07-24 23:17:33.391842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.838 [2024-07-24 23:17:33.391879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.838 qpair failed and we were unable to recover it. 00:29:15.838 [2024-07-24 23:17:33.392316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.838 [2024-07-24 23:17:33.392343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.838 qpair failed and we were unable to recover it. 00:29:15.838 [2024-07-24 23:17:33.392768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.838 [2024-07-24 23:17:33.392798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.838 qpair failed and we were unable to recover it. 00:29:15.838 [2024-07-24 23:17:33.393235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.838 [2024-07-24 23:17:33.393261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.838 qpair failed and we were unable to recover it. 
00:29:15.838 [2024-07-24 23:17:33.393668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.838 [2024-07-24 23:17:33.393695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.838 qpair failed and we were unable to recover it. 00:29:15.838 [2024-07-24 23:17:33.394176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.838 [2024-07-24 23:17:33.394205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.838 qpair failed and we were unable to recover it. 00:29:15.838 [2024-07-24 23:17:33.394644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.838 [2024-07-24 23:17:33.394673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.838 qpair failed and we were unable to recover it. 00:29:15.838 [2024-07-24 23:17:33.395101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.838 [2024-07-24 23:17:33.395131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.838 qpair failed and we were unable to recover it. 00:29:15.838 [2024-07-24 23:17:33.395618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.838 [2024-07-24 23:17:33.395645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.838 qpair failed and we were unable to recover it. 
00:29:15.838 [2024-07-24 23:17:33.396006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.838 [2024-07-24 23:17:33.396035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.838 qpair failed and we were unable to recover it. 00:29:15.838 [2024-07-24 23:17:33.396353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.838 [2024-07-24 23:17:33.396381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.838 qpair failed and we were unable to recover it. 00:29:15.838 [2024-07-24 23:17:33.396820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.838 [2024-07-24 23:17:33.396856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.838 qpair failed and we were unable to recover it. 00:29:15.838 [2024-07-24 23:17:33.397215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.838 [2024-07-24 23:17:33.397247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.838 qpair failed and we were unable to recover it. 00:29:15.838 [2024-07-24 23:17:33.397685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.838 [2024-07-24 23:17:33.397712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.838 qpair failed and we were unable to recover it. 
00:29:15.838 [2024-07-24 23:17:33.398040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.838 [2024-07-24 23:17:33.398069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.838 qpair failed and we were unable to recover it. 00:29:15.838 [2024-07-24 23:17:33.398480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.838 [2024-07-24 23:17:33.398507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.838 qpair failed and we were unable to recover it. 00:29:15.838 [2024-07-24 23:17:33.398827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.838 [2024-07-24 23:17:33.398857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.838 qpair failed and we were unable to recover it. 00:29:15.838 [2024-07-24 23:17:33.399250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.838 [2024-07-24 23:17:33.399278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.838 qpair failed and we were unable to recover it. 00:29:15.838 [2024-07-24 23:17:33.399697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.838 [2024-07-24 23:17:33.399725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.838 qpair failed and we were unable to recover it. 
00:29:15.838 [2024-07-24 23:17:33.400209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.838 [2024-07-24 23:17:33.400238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.838 qpair failed and we were unable to recover it. 00:29:15.838 [2024-07-24 23:17:33.400679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.839 [2024-07-24 23:17:33.400706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.839 qpair failed and we were unable to recover it. 00:29:15.839 [2024-07-24 23:17:33.401029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.839 [2024-07-24 23:17:33.401061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.839 qpair failed and we were unable to recover it. 00:29:15.839 [2024-07-24 23:17:33.401508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.839 [2024-07-24 23:17:33.401536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.839 qpair failed and we were unable to recover it. 00:29:15.839 [2024-07-24 23:17:33.401972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.839 [2024-07-24 23:17:33.402000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.839 qpair failed and we were unable to recover it. 
00:29:15.839 [2024-07-24 23:17:33.402442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.839 [2024-07-24 23:17:33.402469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.839 qpair failed and we were unable to recover it. 00:29:15.839 [2024-07-24 23:17:33.402785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.839 [2024-07-24 23:17:33.402814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.839 qpair failed and we were unable to recover it. 00:29:15.839 [2024-07-24 23:17:33.403291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.839 [2024-07-24 23:17:33.403318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.839 qpair failed and we were unable to recover it. 00:29:15.839 [2024-07-24 23:17:33.403694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.839 [2024-07-24 23:17:33.403722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.839 qpair failed and we were unable to recover it. 00:29:15.839 [2024-07-24 23:17:33.404184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.839 [2024-07-24 23:17:33.404213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.839 qpair failed and we were unable to recover it. 
00:29:15.839 [2024-07-24 23:17:33.404655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.839 [2024-07-24 23:17:33.404681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.839 qpair failed and we were unable to recover it. 00:29:15.839 [2024-07-24 23:17:33.405201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.839 [2024-07-24 23:17:33.405229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.839 qpair failed and we were unable to recover it. 00:29:15.839 [2024-07-24 23:17:33.405552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.839 [2024-07-24 23:17:33.405579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.839 qpair failed and we were unable to recover it. 00:29:15.839 [2024-07-24 23:17:33.406006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.839 [2024-07-24 23:17:33.406034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.839 qpair failed and we were unable to recover it. 00:29:15.839 [2024-07-24 23:17:33.406421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.839 [2024-07-24 23:17:33.406450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.839 qpair failed and we were unable to recover it. 
00:29:15.839 [2024-07-24 23:17:33.406773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.839 [2024-07-24 23:17:33.406808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.839 qpair failed and we were unable to recover it. 00:29:15.839 [2024-07-24 23:17:33.407254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.839 [2024-07-24 23:17:33.407281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.839 qpair failed and we were unable to recover it. 00:29:15.839 [2024-07-24 23:17:33.407725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.839 [2024-07-24 23:17:33.407765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.839 qpair failed and we were unable to recover it. 00:29:15.839 [2024-07-24 23:17:33.408205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.839 [2024-07-24 23:17:33.408233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.839 qpair failed and we were unable to recover it. 00:29:15.839 [2024-07-24 23:17:33.408691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.839 [2024-07-24 23:17:33.408719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.839 qpair failed and we were unable to recover it. 
00:29:15.839 [2024-07-24 23:17:33.409200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.839 [2024-07-24 23:17:33.409229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.839 qpair failed and we were unable to recover it. 00:29:15.839 [2024-07-24 23:17:33.409664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.839 [2024-07-24 23:17:33.409691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.839 qpair failed and we were unable to recover it. 00:29:15.839 [2024-07-24 23:17:33.409967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.839 [2024-07-24 23:17:33.409995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.839 qpair failed and we were unable to recover it. 00:29:15.839 [2024-07-24 23:17:33.410434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.839 [2024-07-24 23:17:33.410460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.839 qpair failed and we were unable to recover it. 00:29:15.839 [2024-07-24 23:17:33.411039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.839 [2024-07-24 23:17:33.411139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.839 qpair failed and we were unable to recover it. 
00:29:15.839 [2024-07-24 23:17:33.411642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.839 [2024-07-24 23:17:33.411677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.839 qpair failed and we were unable to recover it. 00:29:15.839 [2024-07-24 23:17:33.412141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.839 [2024-07-24 23:17:33.412171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.839 qpair failed and we were unable to recover it. 00:29:15.839 [2024-07-24 23:17:33.412381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.839 [2024-07-24 23:17:33.412412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.839 qpair failed and we were unable to recover it. 00:29:15.839 [2024-07-24 23:17:33.412858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.839 [2024-07-24 23:17:33.412889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.839 qpair failed and we were unable to recover it. 00:29:15.839 [2024-07-24 23:17:33.413383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.839 [2024-07-24 23:17:33.413411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.839 qpair failed and we were unable to recover it. 
00:29:15.839 [2024-07-24 23:17:33.413733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.839 [2024-07-24 23:17:33.413783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.839 qpair failed and we were unable to recover it. 00:29:15.839 [2024-07-24 23:17:33.414238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.839 [2024-07-24 23:17:33.414266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.839 qpair failed and we were unable to recover it. 00:29:15.839 [2024-07-24 23:17:33.414710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.840 [2024-07-24 23:17:33.414748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.840 qpair failed and we were unable to recover it. 00:29:15.840 [2024-07-24 23:17:33.415182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.840 [2024-07-24 23:17:33.415210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.840 qpair failed and we were unable to recover it. 00:29:15.840 [2024-07-24 23:17:33.415601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.840 [2024-07-24 23:17:33.415630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.840 qpair failed and we were unable to recover it. 
00:29:15.840 [2024-07-24 23:17:33.416054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.840 [2024-07-24 23:17:33.416084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.840 qpair failed and we were unable to recover it. 00:29:15.840 [2024-07-24 23:17:33.416519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.840 [2024-07-24 23:17:33.416546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.840 qpair failed and we were unable to recover it. 00:29:15.840 [2024-07-24 23:17:33.416891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.840 [2024-07-24 23:17:33.416920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.840 qpair failed and we were unable to recover it. 00:29:15.840 [2024-07-24 23:17:33.417414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.840 [2024-07-24 23:17:33.417443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.840 qpair failed and we were unable to recover it. 00:29:15.840 [2024-07-24 23:17:33.417869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.840 [2024-07-24 23:17:33.417899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.840 qpair failed and we were unable to recover it. 
00:29:15.840 [2024-07-24 23:17:33.418368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.840 [2024-07-24 23:17:33.418395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.840 qpair failed and we were unable to recover it. 00:29:15.840 [2024-07-24 23:17:33.418702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.840 [2024-07-24 23:17:33.418734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.840 qpair failed and we were unable to recover it. 00:29:15.840 [2024-07-24 23:17:33.419192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.840 [2024-07-24 23:17:33.419222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.840 qpair failed and we were unable to recover it. 00:29:15.840 [2024-07-24 23:17:33.419603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.840 [2024-07-24 23:17:33.419630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.840 qpair failed and we were unable to recover it. 00:29:15.840 [2024-07-24 23:17:33.420011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.840 [2024-07-24 23:17:33.420041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.840 qpair failed and we were unable to recover it. 
00:29:15.840 [2024-07-24 23:17:33.420460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.840 [2024-07-24 23:17:33.420487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.840 qpair failed and we were unable to recover it. 00:29:15.840 [2024-07-24 23:17:33.420919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.840 [2024-07-24 23:17:33.420948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.840 qpair failed and we were unable to recover it. 00:29:15.840 [2024-07-24 23:17:33.421388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.840 [2024-07-24 23:17:33.421416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.840 qpair failed and we were unable to recover it. 00:29:15.840 [2024-07-24 23:17:33.421853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.840 [2024-07-24 23:17:33.421881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.840 qpair failed and we were unable to recover it. 00:29:15.840 [2024-07-24 23:17:33.422327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.840 [2024-07-24 23:17:33.422355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.840 qpair failed and we were unable to recover it. 
00:29:15.840 [2024-07-24 23:17:33.422766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.840 [2024-07-24 23:17:33.422794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.840 qpair failed and we were unable to recover it. 00:29:15.840 [2024-07-24 23:17:33.423227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.840 [2024-07-24 23:17:33.423255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.840 qpair failed and we were unable to recover it. 00:29:15.840 [2024-07-24 23:17:33.423575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.840 [2024-07-24 23:17:33.423605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.840 qpair failed and we were unable to recover it. 00:29:15.840 [2024-07-24 23:17:33.424023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.840 [2024-07-24 23:17:33.424052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.840 qpair failed and we were unable to recover it. 00:29:15.840 [2024-07-24 23:17:33.424468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.840 [2024-07-24 23:17:33.424495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.840 qpair failed and we were unable to recover it. 
00:29:15.840 [2024-07-24 23:17:33.424918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.840 [2024-07-24 23:17:33.424948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.840 qpair failed and we were unable to recover it. 00:29:15.840 [2024-07-24 23:17:33.425366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.840 [2024-07-24 23:17:33.425394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.840 qpair failed and we were unable to recover it. 00:29:15.840 [2024-07-24 23:17:33.425834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.840 [2024-07-24 23:17:33.425862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.840 qpair failed and we were unable to recover it. 00:29:15.840 [2024-07-24 23:17:33.426276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.840 [2024-07-24 23:17:33.426304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.840 qpair failed and we were unable to recover it. 00:29:15.840 [2024-07-24 23:17:33.426602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.840 [2024-07-24 23:17:33.426629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.840 qpair failed and we were unable to recover it. 
00:29:15.843 [2024-07-24 23:17:33.474617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.843 [2024-07-24 23:17:33.474644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.843 qpair failed and we were unable to recover it. 00:29:15.843 [2024-07-24 23:17:33.474920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.843 [2024-07-24 23:17:33.474948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.843 qpair failed and we were unable to recover it. 00:29:15.843 [2024-07-24 23:17:33.475367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.843 [2024-07-24 23:17:33.475394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.843 qpair failed and we were unable to recover it. 00:29:15.843 [2024-07-24 23:17:33.475829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.843 [2024-07-24 23:17:33.475858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.843 qpair failed and we were unable to recover it. 00:29:15.843 [2024-07-24 23:17:33.476032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.843 [2024-07-24 23:17:33.476058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.843 qpair failed and we were unable to recover it. 
00:29:15.843 [2024-07-24 23:17:33.476596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.843 [2024-07-24 23:17:33.476622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.843 qpair failed and we were unable to recover it. 00:29:15.843 [2024-07-24 23:17:33.477055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.843 [2024-07-24 23:17:33.477084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.843 qpair failed and we were unable to recover it. 00:29:15.843 [2024-07-24 23:17:33.477569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.843 [2024-07-24 23:17:33.477596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.843 qpair failed and we were unable to recover it. 00:29:15.843 [2024-07-24 23:17:33.478014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.843 [2024-07-24 23:17:33.478041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.843 qpair failed and we were unable to recover it. 00:29:15.843 [2024-07-24 23:17:33.478463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.843 [2024-07-24 23:17:33.478498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.843 qpair failed and we were unable to recover it. 
00:29:15.843 [2024-07-24 23:17:33.478900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.843 [2024-07-24 23:17:33.478929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.843 qpair failed and we were unable to recover it. 00:29:15.843 [2024-07-24 23:17:33.479383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.843 [2024-07-24 23:17:33.479410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.843 qpair failed and we were unable to recover it. 00:29:15.843 [2024-07-24 23:17:33.479811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.843 [2024-07-24 23:17:33.479839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.844 qpair failed and we were unable to recover it. 00:29:15.844 [2024-07-24 23:17:33.480275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.844 [2024-07-24 23:17:33.480301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.844 qpair failed and we were unable to recover it. 00:29:15.844 [2024-07-24 23:17:33.480685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.844 [2024-07-24 23:17:33.480712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.844 qpair failed and we were unable to recover it. 
00:29:15.844 [2024-07-24 23:17:33.481156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.844 [2024-07-24 23:17:33.481183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.844 qpair failed and we were unable to recover it. 00:29:15.844 [2024-07-24 23:17:33.481604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.844 [2024-07-24 23:17:33.481631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.844 qpair failed and we were unable to recover it. 00:29:15.844 [2024-07-24 23:17:33.482015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.844 [2024-07-24 23:17:33.482044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.844 qpair failed and we were unable to recover it. 00:29:15.844 [2024-07-24 23:17:33.482457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.844 [2024-07-24 23:17:33.482484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.844 qpair failed and we were unable to recover it. 00:29:15.844 [2024-07-24 23:17:33.482937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.844 [2024-07-24 23:17:33.482964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.844 qpair failed and we were unable to recover it. 
00:29:15.844 [2024-07-24 23:17:33.483385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.844 [2024-07-24 23:17:33.483411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.844 qpair failed and we were unable to recover it. 00:29:15.844 [2024-07-24 23:17:33.483717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.844 [2024-07-24 23:17:33.483746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.844 qpair failed and we were unable to recover it. 00:29:15.844 [2024-07-24 23:17:33.484202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.844 [2024-07-24 23:17:33.484230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.844 qpair failed and we were unable to recover it. 00:29:15.844 [2024-07-24 23:17:33.484489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.844 [2024-07-24 23:17:33.484516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.844 qpair failed and we were unable to recover it. 00:29:15.844 [2024-07-24 23:17:33.484954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.844 [2024-07-24 23:17:33.484983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.844 qpair failed and we were unable to recover it. 
00:29:15.844 [2024-07-24 23:17:33.485287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.844 [2024-07-24 23:17:33.485318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.844 qpair failed and we were unable to recover it. 00:29:15.844 [2024-07-24 23:17:33.485769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.844 [2024-07-24 23:17:33.485798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.844 qpair failed and we were unable to recover it. 00:29:15.844 [2024-07-24 23:17:33.486117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.844 [2024-07-24 23:17:33.486145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.844 qpair failed and we were unable to recover it. 00:29:15.844 [2024-07-24 23:17:33.486581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.844 [2024-07-24 23:17:33.486607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.844 qpair failed and we were unable to recover it. 00:29:15.844 [2024-07-24 23:17:33.487051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.844 [2024-07-24 23:17:33.487079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.844 qpair failed and we were unable to recover it. 
00:29:15.844 [2024-07-24 23:17:33.487486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.844 [2024-07-24 23:17:33.487512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.844 qpair failed and we were unable to recover it. 00:29:15.844 [2024-07-24 23:17:33.487835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.844 [2024-07-24 23:17:33.487864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.844 qpair failed and we were unable to recover it. 00:29:15.844 [2024-07-24 23:17:33.488302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.844 [2024-07-24 23:17:33.488330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.844 qpair failed and we were unable to recover it. 00:29:15.844 [2024-07-24 23:17:33.488744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.844 [2024-07-24 23:17:33.488803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.844 qpair failed and we were unable to recover it. 00:29:15.844 [2024-07-24 23:17:33.489263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.844 [2024-07-24 23:17:33.489290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.844 qpair failed and we were unable to recover it. 
00:29:15.844 [2024-07-24 23:17:33.489725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.844 [2024-07-24 23:17:33.489762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.844 qpair failed and we were unable to recover it. 00:29:15.844 [2024-07-24 23:17:33.490207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.844 [2024-07-24 23:17:33.490234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.844 qpair failed and we were unable to recover it. 00:29:15.844 [2024-07-24 23:17:33.490647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.844 [2024-07-24 23:17:33.490676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.844 qpair failed and we were unable to recover it. 00:29:15.844 [2024-07-24 23:17:33.491138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.844 [2024-07-24 23:17:33.491167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.844 qpair failed and we were unable to recover it. 00:29:15.844 [2024-07-24 23:17:33.491601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.844 [2024-07-24 23:17:33.491628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.844 qpair failed and we were unable to recover it. 
00:29:15.844 [2024-07-24 23:17:33.491892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.844 [2024-07-24 23:17:33.491920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.844 qpair failed and we were unable to recover it. 00:29:15.844 [2024-07-24 23:17:33.492199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.844 [2024-07-24 23:17:33.492226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.844 qpair failed and we were unable to recover it. 00:29:15.844 [2024-07-24 23:17:33.492480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.844 [2024-07-24 23:17:33.492507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.844 qpair failed and we were unable to recover it. 00:29:15.844 [2024-07-24 23:17:33.492929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.844 [2024-07-24 23:17:33.492957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.844 qpair failed and we were unable to recover it. 00:29:15.844 [2024-07-24 23:17:33.493277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.845 [2024-07-24 23:17:33.493309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.845 qpair failed and we were unable to recover it. 
00:29:15.845 [2024-07-24 23:17:33.493762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.845 [2024-07-24 23:17:33.493790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.845 qpair failed and we were unable to recover it. 00:29:15.845 [2024-07-24 23:17:33.494233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.845 [2024-07-24 23:17:33.494259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.845 qpair failed and we were unable to recover it. 00:29:15.845 [2024-07-24 23:17:33.494704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.845 [2024-07-24 23:17:33.494732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.845 qpair failed and we were unable to recover it. 00:29:15.845 [2024-07-24 23:17:33.495212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.845 [2024-07-24 23:17:33.495241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.845 qpair failed and we were unable to recover it. 00:29:15.845 [2024-07-24 23:17:33.495677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.845 [2024-07-24 23:17:33.495710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.845 qpair failed and we were unable to recover it. 
00:29:15.845 [2024-07-24 23:17:33.496121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.845 [2024-07-24 23:17:33.496150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.845 qpair failed and we were unable to recover it. 00:29:15.845 [2024-07-24 23:17:33.496463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.845 [2024-07-24 23:17:33.496492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.845 qpair failed and we were unable to recover it. 00:29:15.845 [2024-07-24 23:17:33.496933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.845 [2024-07-24 23:17:33.496963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.845 qpair failed and we were unable to recover it. 00:29:15.845 [2024-07-24 23:17:33.497374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.845 [2024-07-24 23:17:33.497402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.845 qpair failed and we were unable to recover it. 00:29:15.845 [2024-07-24 23:17:33.497841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.845 [2024-07-24 23:17:33.497869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.845 qpair failed and we were unable to recover it. 
00:29:15.845 [2024-07-24 23:17:33.498299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.845 [2024-07-24 23:17:33.498327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.845 qpair failed and we were unable to recover it. 00:29:15.845 [2024-07-24 23:17:33.498770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.845 [2024-07-24 23:17:33.498797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.845 qpair failed and we were unable to recover it. 00:29:15.845 [2024-07-24 23:17:33.499266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.845 [2024-07-24 23:17:33.499292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.845 qpair failed and we were unable to recover it. 00:29:15.845 [2024-07-24 23:17:33.499703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.845 [2024-07-24 23:17:33.499730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.845 qpair failed and we were unable to recover it. 00:29:15.845 [2024-07-24 23:17:33.500183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.845 [2024-07-24 23:17:33.500210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.845 qpair failed and we were unable to recover it. 
00:29:15.845 [2024-07-24 23:17:33.500633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.845 [2024-07-24 23:17:33.500660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.845 qpair failed and we were unable to recover it. 00:29:15.845 [2024-07-24 23:17:33.501103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.845 [2024-07-24 23:17:33.501133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.845 qpair failed and we were unable to recover it. 00:29:15.845 [2024-07-24 23:17:33.501503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.845 [2024-07-24 23:17:33.501532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.845 qpair failed and we were unable to recover it. 00:29:15.845 [2024-07-24 23:17:33.501861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.845 [2024-07-24 23:17:33.501890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.845 qpair failed and we were unable to recover it. 00:29:15.845 [2024-07-24 23:17:33.502342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.845 [2024-07-24 23:17:33.502370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.845 qpair failed and we were unable to recover it. 
00:29:15.845 [2024-07-24 23:17:33.502776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.845 [2024-07-24 23:17:33.502805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.845 qpair failed and we were unable to recover it. 00:29:15.845 [2024-07-24 23:17:33.503226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.845 [2024-07-24 23:17:33.503254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.845 qpair failed and we were unable to recover it. 00:29:15.845 [2024-07-24 23:17:33.503635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.845 [2024-07-24 23:17:33.503663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.845 qpair failed and we were unable to recover it. 00:29:15.845 [2024-07-24 23:17:33.504076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.845 [2024-07-24 23:17:33.504104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.845 qpair failed and we were unable to recover it. 00:29:15.845 [2024-07-24 23:17:33.504501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.845 [2024-07-24 23:17:33.504528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.845 qpair failed and we were unable to recover it. 
00:29:15.845 [2024-07-24 23:17:33.504930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.845 [2024-07-24 23:17:33.504960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.845 qpair failed and we were unable to recover it. 00:29:15.845 [2024-07-24 23:17:33.505435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.845 [2024-07-24 23:17:33.505461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.845 qpair failed and we were unable to recover it. 00:29:15.845 [2024-07-24 23:17:33.505876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.845 [2024-07-24 23:17:33.505905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.845 qpair failed and we were unable to recover it. 00:29:15.845 [2024-07-24 23:17:33.506250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.845 [2024-07-24 23:17:33.506276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.845 qpair failed and we were unable to recover it. 00:29:15.845 [2024-07-24 23:17:33.506735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.845 [2024-07-24 23:17:33.506773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.845 qpair failed and we were unable to recover it. 
00:29:15.845 [2024-07-24 23:17:33.507205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.845 [2024-07-24 23:17:33.507233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:15.845 qpair failed and we were unable to recover it.
00:29:15.849 [2024-07-24 23:17:33.556895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.556924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 00:29:15.849 [2024-07-24 23:17:33.557338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.557366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 00:29:15.849 [2024-07-24 23:17:33.557686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.557718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 00:29:15.849 [2024-07-24 23:17:33.558118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.558146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 00:29:15.849 [2024-07-24 23:17:33.558591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.558618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 
00:29:15.849 [2024-07-24 23:17:33.559049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.559077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 00:29:15.849 [2024-07-24 23:17:33.559280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.559307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 00:29:15.849 [2024-07-24 23:17:33.559610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.559637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 00:29:15.849 [2024-07-24 23:17:33.560048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.560077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 00:29:15.849 [2024-07-24 23:17:33.560508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.560535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 
00:29:15.849 [2024-07-24 23:17:33.560841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.560868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 00:29:15.849 [2024-07-24 23:17:33.561293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.561321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 00:29:15.849 [2024-07-24 23:17:33.561739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.561786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 00:29:15.849 [2024-07-24 23:17:33.562243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.562269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 00:29:15.849 [2024-07-24 23:17:33.562685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.562712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 
00:29:15.849 [2024-07-24 23:17:33.563202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.563230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 00:29:15.849 [2024-07-24 23:17:33.563552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.563579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 00:29:15.849 [2024-07-24 23:17:33.563804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.563833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 00:29:15.849 [2024-07-24 23:17:33.564261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.564287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 00:29:15.849 [2024-07-24 23:17:33.564706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.564740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 
00:29:15.849 [2024-07-24 23:17:33.565183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.565211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 00:29:15.849 [2024-07-24 23:17:33.565643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.565670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 00:29:15.849 [2024-07-24 23:17:33.565965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.565993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 00:29:15.849 [2024-07-24 23:17:33.566434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.566463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 00:29:15.849 [2024-07-24 23:17:33.566882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.566910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 
00:29:15.849 [2024-07-24 23:17:33.567357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.567383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 00:29:15.849 [2024-07-24 23:17:33.567806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.567834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 00:29:15.849 [2024-07-24 23:17:33.568242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.568269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 00:29:15.849 [2024-07-24 23:17:33.568745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.568784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 00:29:15.849 [2024-07-24 23:17:33.569201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.569228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 
00:29:15.849 [2024-07-24 23:17:33.569660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.569687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 00:29:15.849 [2024-07-24 23:17:33.569979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.570009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 00:29:15.849 [2024-07-24 23:17:33.570412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.570439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 00:29:15.849 [2024-07-24 23:17:33.570815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.570843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 00:29:15.849 [2024-07-24 23:17:33.571268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.571296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 
00:29:15.849 [2024-07-24 23:17:33.571701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.849 [2024-07-24 23:17:33.571727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.849 qpair failed and we were unable to recover it. 00:29:15.849 [2024-07-24 23:17:33.572162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.572190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 00:29:15.850 [2024-07-24 23:17:33.572536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.572563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 00:29:15.850 [2024-07-24 23:17:33.572876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.572908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 00:29:15.850 [2024-07-24 23:17:33.573236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.573266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 
00:29:15.850 [2024-07-24 23:17:33.573696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.573723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 00:29:15.850 [2024-07-24 23:17:33.574147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.574175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 00:29:15.850 [2024-07-24 23:17:33.574587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.574613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 00:29:15.850 [2024-07-24 23:17:33.575034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.575062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 00:29:15.850 [2024-07-24 23:17:33.575496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.575524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 
00:29:15.850 [2024-07-24 23:17:33.575929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.575957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 00:29:15.850 [2024-07-24 23:17:33.576261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.576289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 00:29:15.850 [2024-07-24 23:17:33.576724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.576762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 00:29:15.850 [2024-07-24 23:17:33.576979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.577009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 00:29:15.850 [2024-07-24 23:17:33.577440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.577468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 
00:29:15.850 [2024-07-24 23:17:33.577915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.577944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 00:29:15.850 [2024-07-24 23:17:33.578434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.578463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 00:29:15.850 [2024-07-24 23:17:33.578747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.578786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 00:29:15.850 [2024-07-24 23:17:33.579195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.579221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 00:29:15.850 [2024-07-24 23:17:33.579646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.579672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 
00:29:15.850 [2024-07-24 23:17:33.580090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.580118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 00:29:15.850 [2024-07-24 23:17:33.580546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.580573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 00:29:15.850 [2024-07-24 23:17:33.581013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.581041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 00:29:15.850 [2024-07-24 23:17:33.581491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.581518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 00:29:15.850 [2024-07-24 23:17:33.581820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.581855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 
00:29:15.850 [2024-07-24 23:17:33.582280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.582307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 00:29:15.850 [2024-07-24 23:17:33.582735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.582775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 00:29:15.850 [2024-07-24 23:17:33.583120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.583147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 00:29:15.850 [2024-07-24 23:17:33.583561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.583587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 00:29:15.850 [2024-07-24 23:17:33.583988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.584015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 
00:29:15.850 [2024-07-24 23:17:33.584430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.584457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 00:29:15.850 [2024-07-24 23:17:33.584926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.584953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 00:29:15.850 [2024-07-24 23:17:33.585276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.585306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 00:29:15.850 [2024-07-24 23:17:33.585680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.585707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 00:29:15.850 [2024-07-24 23:17:33.586100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.586128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 
00:29:15.850 [2024-07-24 23:17:33.586436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.586466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 00:29:15.850 [2024-07-24 23:17:33.586934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.586962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 00:29:15.850 [2024-07-24 23:17:33.587436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.587464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 00:29:15.850 [2024-07-24 23:17:33.587908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.850 [2024-07-24 23:17:33.587937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.850 qpair failed and we were unable to recover it. 00:29:15.851 [2024-07-24 23:17:33.588372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.851 [2024-07-24 23:17:33.588399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:15.851 qpair failed and we were unable to recover it. 
00:29:15.851 [2024-07-24 23:17:33.588678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.851 [2024-07-24 23:17:33.588704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:15.851 qpair failed and we were unable to recover it.
[identical connect() failed (errno = 111) / sock connection error / qpair failed sequence repeated for every retry from 23:17:33.589 through 23:17:33.637 against tqpair=0x7f6bf4000b90 (addr=10.0.0.2, port=4420); repeats elided]
00:29:16.123 [2024-07-24 23:17:33.637844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.123 [2024-07-24 23:17:33.637871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.123 qpair failed and we were unable to recover it. 00:29:16.123 [2024-07-24 23:17:33.638322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.123 [2024-07-24 23:17:33.638348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.123 qpair failed and we were unable to recover it. 00:29:16.123 [2024-07-24 23:17:33.638661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.123 [2024-07-24 23:17:33.638690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.123 qpair failed and we were unable to recover it. 00:29:16.123 [2024-07-24 23:17:33.639122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.123 [2024-07-24 23:17:33.639151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.123 qpair failed and we were unable to recover it. 00:29:16.123 [2024-07-24 23:17:33.639564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.123 [2024-07-24 23:17:33.639591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.123 qpair failed and we were unable to recover it. 
00:29:16.123 [2024-07-24 23:17:33.640009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.123 [2024-07-24 23:17:33.640037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.123 qpair failed and we were unable to recover it. 00:29:16.123 [2024-07-24 23:17:33.640480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.123 [2024-07-24 23:17:33.640506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.123 qpair failed and we were unable to recover it. 00:29:16.123 [2024-07-24 23:17:33.640935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.123 [2024-07-24 23:17:33.640963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.123 qpair failed and we were unable to recover it. 00:29:16.123 [2024-07-24 23:17:33.641380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.123 [2024-07-24 23:17:33.641408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.123 qpair failed and we were unable to recover it. 00:29:16.123 [2024-07-24 23:17:33.641783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.123 [2024-07-24 23:17:33.641810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.123 qpair failed and we were unable to recover it. 
00:29:16.123 [2024-07-24 23:17:33.642230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.123 [2024-07-24 23:17:33.642257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.123 qpair failed and we were unable to recover it. 00:29:16.123 [2024-07-24 23:17:33.642698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.123 [2024-07-24 23:17:33.642725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.123 qpair failed and we were unable to recover it. 00:29:16.123 [2024-07-24 23:17:33.643050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.123 [2024-07-24 23:17:33.643078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.123 qpair failed and we were unable to recover it. 00:29:16.123 [2024-07-24 23:17:33.643492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.123 [2024-07-24 23:17:33.643521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.123 qpair failed and we were unable to recover it. 00:29:16.123 [2024-07-24 23:17:33.643958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.123 [2024-07-24 23:17:33.643986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.123 qpair failed and we were unable to recover it. 
00:29:16.123 [2024-07-24 23:17:33.644295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.123 [2024-07-24 23:17:33.644323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.123 qpair failed and we were unable to recover it. 00:29:16.123 [2024-07-24 23:17:33.644663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.123 [2024-07-24 23:17:33.644692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.123 qpair failed and we were unable to recover it. 00:29:16.123 [2024-07-24 23:17:33.645126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.645154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 00:29:16.124 [2024-07-24 23:17:33.645586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.645614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 00:29:16.124 [2024-07-24 23:17:33.646053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.646082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 
00:29:16.124 [2024-07-24 23:17:33.646521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.646549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 00:29:16.124 [2024-07-24 23:17:33.647000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.647028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 00:29:16.124 [2024-07-24 23:17:33.647456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.647485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 00:29:16.124 [2024-07-24 23:17:33.647902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.647931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 00:29:16.124 [2024-07-24 23:17:33.648214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.648241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 
00:29:16.124 [2024-07-24 23:17:33.648640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.648667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 00:29:16.124 [2024-07-24 23:17:33.649079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.649107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 00:29:16.124 [2024-07-24 23:17:33.649541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.649569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 00:29:16.124 [2024-07-24 23:17:33.649973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.650002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 00:29:16.124 [2024-07-24 23:17:33.650361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.650395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 
00:29:16.124 [2024-07-24 23:17:33.650816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.650845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 00:29:16.124 [2024-07-24 23:17:33.651281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.651309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 00:29:16.124 [2024-07-24 23:17:33.651589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.651616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 00:29:16.124 [2024-07-24 23:17:33.652061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.652090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 00:29:16.124 [2024-07-24 23:17:33.652518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.652547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 
00:29:16.124 [2024-07-24 23:17:33.652862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.652890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 00:29:16.124 [2024-07-24 23:17:33.653210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.653237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 00:29:16.124 [2024-07-24 23:17:33.653703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.653730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 00:29:16.124 [2024-07-24 23:17:33.654048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.654076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 00:29:16.124 [2024-07-24 23:17:33.654388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.654419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 
00:29:16.124 [2024-07-24 23:17:33.654883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.654912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 00:29:16.124 [2024-07-24 23:17:33.655387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.655415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 00:29:16.124 [2024-07-24 23:17:33.655816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.655846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 00:29:16.124 [2024-07-24 23:17:33.656298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.656326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 00:29:16.124 [2024-07-24 23:17:33.656654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.656686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 
00:29:16.124 [2024-07-24 23:17:33.657094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.657123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 00:29:16.124 [2024-07-24 23:17:33.657462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.657490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 00:29:16.124 [2024-07-24 23:17:33.657902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.657930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 00:29:16.124 [2024-07-24 23:17:33.658380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.658407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 00:29:16.124 [2024-07-24 23:17:33.658856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.658884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 
00:29:16.124 [2024-07-24 23:17:33.659316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.659346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 00:29:16.124 [2024-07-24 23:17:33.659778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.659806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 00:29:16.124 [2024-07-24 23:17:33.660259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.660287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 00:29:16.124 [2024-07-24 23:17:33.660727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.660775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 00:29:16.124 [2024-07-24 23:17:33.661170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.124 [2024-07-24 23:17:33.661197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.124 qpair failed and we were unable to recover it. 
00:29:16.125 [2024-07-24 23:17:33.661634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.125 [2024-07-24 23:17:33.661661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.125 qpair failed and we were unable to recover it. 00:29:16.125 [2024-07-24 23:17:33.662054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.125 [2024-07-24 23:17:33.662083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.125 qpair failed and we were unable to recover it. 00:29:16.125 [2024-07-24 23:17:33.662392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.125 [2024-07-24 23:17:33.662419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.125 qpair failed and we were unable to recover it. 00:29:16.125 [2024-07-24 23:17:33.662857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.125 [2024-07-24 23:17:33.662884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.125 qpair failed and we were unable to recover it. 00:29:16.125 [2024-07-24 23:17:33.663331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.125 [2024-07-24 23:17:33.663358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.125 qpair failed and we were unable to recover it. 
00:29:16.125 [2024-07-24 23:17:33.663693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.125 [2024-07-24 23:17:33.663721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.125 qpair failed and we were unable to recover it. 00:29:16.125 [2024-07-24 23:17:33.664197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.125 [2024-07-24 23:17:33.664226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.125 qpair failed and we were unable to recover it. 00:29:16.125 [2024-07-24 23:17:33.664662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.125 [2024-07-24 23:17:33.664689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.125 qpair failed and we were unable to recover it. 00:29:16.125 [2024-07-24 23:17:33.665121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.125 [2024-07-24 23:17:33.665150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.125 qpair failed and we were unable to recover it. 00:29:16.125 [2024-07-24 23:17:33.665567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.125 [2024-07-24 23:17:33.665593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.125 qpair failed and we were unable to recover it. 
00:29:16.125 [2024-07-24 23:17:33.666044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.125 [2024-07-24 23:17:33.666073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.125 qpair failed and we were unable to recover it. 00:29:16.125 [2024-07-24 23:17:33.666521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.125 [2024-07-24 23:17:33.666548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.125 qpair failed and we were unable to recover it. 00:29:16.125 [2024-07-24 23:17:33.666991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.125 [2024-07-24 23:17:33.667018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.125 qpair failed and we were unable to recover it. 00:29:16.125 [2024-07-24 23:17:33.667424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.125 [2024-07-24 23:17:33.667451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.125 qpair failed and we were unable to recover it. 00:29:16.125 [2024-07-24 23:17:33.667839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.125 [2024-07-24 23:17:33.667872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.125 qpair failed and we were unable to recover it. 
00:29:16.125 [2024-07-24 23:17:33.668317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.125 [2024-07-24 23:17:33.668346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.125 qpair failed and we were unable to recover it. 00:29:16.125 [2024-07-24 23:17:33.668797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.125 [2024-07-24 23:17:33.668826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.125 qpair failed and we were unable to recover it. 00:29:16.125 [2024-07-24 23:17:33.669149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.125 [2024-07-24 23:17:33.669177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.125 qpair failed and we were unable to recover it. 00:29:16.125 [2024-07-24 23:17:33.669626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.125 [2024-07-24 23:17:33.669653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.125 qpair failed and we were unable to recover it. 00:29:16.125 [2024-07-24 23:17:33.670066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.125 [2024-07-24 23:17:33.670094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.125 qpair failed and we were unable to recover it. 
00:29:16.125 [2024-07-24 23:17:33.670479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.125 [2024-07-24 23:17:33.670506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.125 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet (posix.c:1023 errno = 111, nvme_tcp.c:2383 sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats for every retry from 23:17:33.670975 through 23:17:33.719859 ...]
00:29:16.128 [2024-07-24 23:17:33.720273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.128 [2024-07-24 23:17:33.720300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.128 qpair failed and we were unable to recover it.
00:29:16.128 [2024-07-24 23:17:33.720719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.128 [2024-07-24 23:17:33.720747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.128 qpair failed and we were unable to recover it. 00:29:16.128 [2024-07-24 23:17:33.721079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.128 [2024-07-24 23:17:33.721109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.128 qpair failed and we were unable to recover it. 00:29:16.128 [2024-07-24 23:17:33.721545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.128 [2024-07-24 23:17:33.721573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.128 qpair failed and we were unable to recover it. 00:29:16.128 [2024-07-24 23:17:33.721991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.128 [2024-07-24 23:17:33.722018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.128 qpair failed and we were unable to recover it. 00:29:16.128 [2024-07-24 23:17:33.722296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.128 [2024-07-24 23:17:33.722324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.128 qpair failed and we were unable to recover it. 
00:29:16.128 [2024-07-24 23:17:33.722725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.128 [2024-07-24 23:17:33.722772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.128 qpair failed and we were unable to recover it. 00:29:16.128 [2024-07-24 23:17:33.723184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.128 [2024-07-24 23:17:33.723213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.128 qpair failed and we were unable to recover it. 00:29:16.128 [2024-07-24 23:17:33.723633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.128 [2024-07-24 23:17:33.723663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 00:29:16.129 [2024-07-24 23:17:33.724090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.724120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 00:29:16.129 [2024-07-24 23:17:33.724498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.724527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 
00:29:16.129 [2024-07-24 23:17:33.724952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.724981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 00:29:16.129 [2024-07-24 23:17:33.725400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.725427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 00:29:16.129 [2024-07-24 23:17:33.725868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.725896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 00:29:16.129 [2024-07-24 23:17:33.726230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.726261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 00:29:16.129 [2024-07-24 23:17:33.726703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.726731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 
00:29:16.129 [2024-07-24 23:17:33.727158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.727186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 00:29:16.129 [2024-07-24 23:17:33.727625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.727652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 00:29:16.129 [2024-07-24 23:17:33.728071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.728099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 00:29:16.129 [2024-07-24 23:17:33.728458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.728485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 00:29:16.129 [2024-07-24 23:17:33.728923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.728951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 
00:29:16.129 [2024-07-24 23:17:33.729285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.729319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 00:29:16.129 [2024-07-24 23:17:33.729766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.729794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 00:29:16.129 [2024-07-24 23:17:33.730270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.730298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 00:29:16.129 [2024-07-24 23:17:33.730586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.730615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 00:29:16.129 [2024-07-24 23:17:33.731057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.731086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 
00:29:16.129 [2024-07-24 23:17:33.731404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.731434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 00:29:16.129 [2024-07-24 23:17:33.731871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.731906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 00:29:16.129 [2024-07-24 23:17:33.732320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.732347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 00:29:16.129 [2024-07-24 23:17:33.732786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.732814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 00:29:16.129 [2024-07-24 23:17:33.733239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.733266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 
00:29:16.129 [2024-07-24 23:17:33.733718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.733745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 00:29:16.129 [2024-07-24 23:17:33.734178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.734206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 00:29:16.129 [2024-07-24 23:17:33.734496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.734525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 00:29:16.129 [2024-07-24 23:17:33.734970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.734998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 00:29:16.129 [2024-07-24 23:17:33.735450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.735477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 
00:29:16.129 [2024-07-24 23:17:33.735788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.735820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 00:29:16.129 [2024-07-24 23:17:33.736204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.736230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 00:29:16.129 [2024-07-24 23:17:33.736653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.736680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 00:29:16.129 [2024-07-24 23:17:33.737121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.737149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 00:29:16.129 [2024-07-24 23:17:33.737657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.737684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 
00:29:16.129 [2024-07-24 23:17:33.738121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.738149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 00:29:16.129 [2024-07-24 23:17:33.738447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.738476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 00:29:16.129 [2024-07-24 23:17:33.738914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.738942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 00:29:16.129 [2024-07-24 23:17:33.739384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.739412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 00:29:16.129 [2024-07-24 23:17:33.739854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.129 [2024-07-24 23:17:33.739885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.129 qpair failed and we were unable to recover it. 
00:29:16.129 [2024-07-24 23:17:33.740315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.130 [2024-07-24 23:17:33.740342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.130 qpair failed and we were unable to recover it. 00:29:16.130 [2024-07-24 23:17:33.740786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.130 [2024-07-24 23:17:33.740815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.130 qpair failed and we were unable to recover it. 00:29:16.130 [2024-07-24 23:17:33.741223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.130 [2024-07-24 23:17:33.741250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.130 qpair failed and we were unable to recover it. 00:29:16.130 [2024-07-24 23:17:33.741682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.130 [2024-07-24 23:17:33.741709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.130 qpair failed and we were unable to recover it. 00:29:16.130 [2024-07-24 23:17:33.742130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.130 [2024-07-24 23:17:33.742159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.130 qpair failed and we were unable to recover it. 
00:29:16.130 [2024-07-24 23:17:33.742444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.130 [2024-07-24 23:17:33.742471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.130 qpair failed and we were unable to recover it. 00:29:16.130 [2024-07-24 23:17:33.742874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.130 [2024-07-24 23:17:33.742921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.130 qpair failed and we were unable to recover it. 00:29:16.130 [2024-07-24 23:17:33.743363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.130 [2024-07-24 23:17:33.743392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.130 qpair failed and we were unable to recover it. 00:29:16.130 [2024-07-24 23:17:33.743832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.130 [2024-07-24 23:17:33.743861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.130 qpair failed and we were unable to recover it. 00:29:16.130 [2024-07-24 23:17:33.744303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.130 [2024-07-24 23:17:33.744331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.130 qpair failed and we were unable to recover it. 
00:29:16.130 [2024-07-24 23:17:33.744828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.130 [2024-07-24 23:17:33.744856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.130 qpair failed and we were unable to recover it. 00:29:16.130 [2024-07-24 23:17:33.745311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.130 [2024-07-24 23:17:33.745338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.130 qpair failed and we were unable to recover it. 00:29:16.130 [2024-07-24 23:17:33.745846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.130 [2024-07-24 23:17:33.745875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.130 qpair failed and we were unable to recover it. 00:29:16.130 [2024-07-24 23:17:33.746135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.130 [2024-07-24 23:17:33.746163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.130 qpair failed and we were unable to recover it. 00:29:16.130 [2024-07-24 23:17:33.746567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.130 [2024-07-24 23:17:33.746594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.130 qpair failed and we were unable to recover it. 
00:29:16.130 [2024-07-24 23:17:33.747053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.130 [2024-07-24 23:17:33.747080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.130 qpair failed and we were unable to recover it. 00:29:16.130 [2024-07-24 23:17:33.747468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.130 [2024-07-24 23:17:33.747495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.130 qpair failed and we were unable to recover it. 00:29:16.130 [2024-07-24 23:17:33.747789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.130 [2024-07-24 23:17:33.747817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.130 qpair failed and we were unable to recover it. 00:29:16.130 [2024-07-24 23:17:33.748278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.130 [2024-07-24 23:17:33.748306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.130 qpair failed and we were unable to recover it. 00:29:16.130 [2024-07-24 23:17:33.748776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.130 [2024-07-24 23:17:33.748805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.130 qpair failed and we were unable to recover it. 
00:29:16.130 [2024-07-24 23:17:33.749131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.130 [2024-07-24 23:17:33.749159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.130 qpair failed and we were unable to recover it. 00:29:16.130 [2024-07-24 23:17:33.749688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.130 [2024-07-24 23:17:33.749721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.130 qpair failed and we were unable to recover it. 00:29:16.130 [2024-07-24 23:17:33.750204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.130 [2024-07-24 23:17:33.750233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.130 qpair failed and we were unable to recover it. 00:29:16.130 [2024-07-24 23:17:33.750665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.130 [2024-07-24 23:17:33.750691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.130 qpair failed and we were unable to recover it. 00:29:16.130 [2024-07-24 23:17:33.751110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.130 [2024-07-24 23:17:33.751138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.130 qpair failed and we were unable to recover it. 
00:29:16.130 [2024-07-24 23:17:33.751404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.130 [2024-07-24 23:17:33.751432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.130 qpair failed and we were unable to recover it. 00:29:16.130 [2024-07-24 23:17:33.751908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.130 [2024-07-24 23:17:33.751938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.130 qpair failed and we were unable to recover it. 00:29:16.130 [2024-07-24 23:17:33.752357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.130 [2024-07-24 23:17:33.752387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.130 qpair failed and we were unable to recover it. 00:29:16.130 [2024-07-24 23:17:33.752639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.130 [2024-07-24 23:17:33.752666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.130 qpair failed and we were unable to recover it. 00:29:16.130 [2024-07-24 23:17:33.753214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.130 [2024-07-24 23:17:33.753245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.130 qpair failed and we were unable to recover it. 
00:29:16.130 [2024-07-24 23:17:33.753651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.130 [2024-07-24 23:17:33.753679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.130 qpair failed and we were unable to recover it.
00:29:16.130 [2024-07-24 23:17:33.754108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.130 [2024-07-24 23:17:33.754137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.130 qpair failed and we were unable to recover it.
00:29:16.130 [2024-07-24 23:17:33.754573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.130 [2024-07-24 23:17:33.754601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.130 qpair failed and we were unable to recover it.
00:29:16.130 [2024-07-24 23:17:33.754906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.130 [2024-07-24 23:17:33.754937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.130 qpair failed and we were unable to recover it.
00:29:16.130 [2024-07-24 23:17:33.755320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.130 [2024-07-24 23:17:33.755348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.130 qpair failed and we were unable to recover it.
00:29:16.130 [2024-07-24 23:17:33.755857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.130 [2024-07-24 23:17:33.755886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.130 qpair failed and we were unable to recover it.
00:29:16.130 [2024-07-24 23:17:33.756301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.130 [2024-07-24 23:17:33.756328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.130 qpair failed and we were unable to recover it.
00:29:16.130 [2024-07-24 23:17:33.756749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.130 [2024-07-24 23:17:33.756790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.130 qpair failed and we were unable to recover it.
00:29:16.131 [2024-07-24 23:17:33.757267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.131 [2024-07-24 23:17:33.757294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.131 qpair failed and we were unable to recover it.
00:29:16.131 [2024-07-24 23:17:33.757731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.131 [2024-07-24 23:17:33.757777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.131 qpair failed and we were unable to recover it.
00:29:16.131 [2024-07-24 23:17:33.758236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.131 [2024-07-24 23:17:33.758264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.131 qpair failed and we were unable to recover it.
00:29:16.131 [2024-07-24 23:17:33.758683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.131 [2024-07-24 23:17:33.758711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.131 qpair failed and we were unable to recover it.
00:29:16.131 [2024-07-24 23:17:33.759039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.131 [2024-07-24 23:17:33.759068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.131 qpair failed and we were unable to recover it.
00:29:16.131 [2024-07-24 23:17:33.759520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.131 [2024-07-24 23:17:33.759546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.131 qpair failed and we were unable to recover it.
00:29:16.131 [2024-07-24 23:17:33.759985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.131 [2024-07-24 23:17:33.760014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.131 qpair failed and we were unable to recover it.
00:29:16.131 [2024-07-24 23:17:33.760467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.131 [2024-07-24 23:17:33.760496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.131 qpair failed and we were unable to recover it.
00:29:16.131 [2024-07-24 23:17:33.760927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.131 [2024-07-24 23:17:33.760955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.131 qpair failed and we were unable to recover it.
00:29:16.131 [2024-07-24 23:17:33.761396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.131 [2024-07-24 23:17:33.761425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.131 qpair failed and we were unable to recover it.
00:29:16.131 [2024-07-24 23:17:33.761717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.131 [2024-07-24 23:17:33.761745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.131 qpair failed and we were unable to recover it.
00:29:16.131 [2024-07-24 23:17:33.762155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.131 [2024-07-24 23:17:33.762183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.131 qpair failed and we were unable to recover it.
00:29:16.131 [2024-07-24 23:17:33.762697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.131 [2024-07-24 23:17:33.762723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.131 qpair failed and we were unable to recover it.
00:29:16.131 [2024-07-24 23:17:33.763181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.131 [2024-07-24 23:17:33.763210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.131 qpair failed and we were unable to recover it.
00:29:16.131 [2024-07-24 23:17:33.763653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.131 [2024-07-24 23:17:33.763682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.131 qpair failed and we were unable to recover it.
00:29:16.131 [2024-07-24 23:17:33.764123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.131 [2024-07-24 23:17:33.764152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.131 qpair failed and we were unable to recover it.
00:29:16.131 [2024-07-24 23:17:33.764604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.131 [2024-07-24 23:17:33.764631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.131 qpair failed and we were unable to recover it.
00:29:16.131 [2024-07-24 23:17:33.765042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.131 [2024-07-24 23:17:33.765071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.131 qpair failed and we were unable to recover it.
00:29:16.131 [2024-07-24 23:17:33.765510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.131 [2024-07-24 23:17:33.765538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.131 qpair failed and we were unable to recover it.
00:29:16.131 [2024-07-24 23:17:33.765957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.131 [2024-07-24 23:17:33.765986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.131 qpair failed and we were unable to recover it.
00:29:16.131 [2024-07-24 23:17:33.766401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.131 [2024-07-24 23:17:33.766427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.131 qpair failed and we were unable to recover it.
00:29:16.131 [2024-07-24 23:17:33.766869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.131 [2024-07-24 23:17:33.766897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.131 qpair failed and we were unable to recover it.
00:29:16.131 [2024-07-24 23:17:33.767260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.131 [2024-07-24 23:17:33.767286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.131 qpair failed and we were unable to recover it.
00:29:16.131 [2024-07-24 23:17:33.767709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.131 [2024-07-24 23:17:33.767744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.131 qpair failed and we were unable to recover it.
00:29:16.131 [2024-07-24 23:17:33.768189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.131 [2024-07-24 23:17:33.768216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.131 qpair failed and we were unable to recover it.
00:29:16.131 [2024-07-24 23:17:33.768651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.131 [2024-07-24 23:17:33.768678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.131 qpair failed and we were unable to recover it.
00:29:16.131 [2024-07-24 23:17:33.769081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.131 [2024-07-24 23:17:33.769109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.131 qpair failed and we were unable to recover it.
00:29:16.131 [2024-07-24 23:17:33.769615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.131 [2024-07-24 23:17:33.769642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.131 qpair failed and we were unable to recover it.
00:29:16.131 [2024-07-24 23:17:33.770060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.131 [2024-07-24 23:17:33.770088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.131 qpair failed and we were unable to recover it.
00:29:16.131 [2024-07-24 23:17:33.770558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.131 [2024-07-24 23:17:33.770587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.131 qpair failed and we were unable to recover it.
00:29:16.131 [2024-07-24 23:17:33.770887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.131 [2024-07-24 23:17:33.770916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.131 qpair failed and we were unable to recover it.
00:29:16.131 [2024-07-24 23:17:33.771370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.131 [2024-07-24 23:17:33.771397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.131 qpair failed and we were unable to recover it.
00:29:16.131 [2024-07-24 23:17:33.771815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.771844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.772274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.772301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.772789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.772820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.773307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.773336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.773649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.773679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.774143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.774172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.774496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.774524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.774973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.775002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.775265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.775291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.775708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.775735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.776194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.776223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.776615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.776642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.777001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.777033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.777341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.777367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.777811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.777839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.778277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.778304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.778717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.778745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.779073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.779108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.779565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.779593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.780011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.780040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.780480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.780507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.780824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.780852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.781320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.781346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.781650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.781677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.782082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.782109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.782549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.782575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.783007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.783035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.783422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.783451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.783890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.783917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.784369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.784397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.784839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.784868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.785320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.785352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.785747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.785794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.786212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.786240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.786678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.786706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.787134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.787163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.787489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.787519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.132 [2024-07-24 23:17:33.787866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.132 [2024-07-24 23:17:33.787894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.132 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.788178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.788206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.788652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.788680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.789039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.789069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.789497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.789524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.789958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.789986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.790402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.790429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.790929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.790957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.791142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.791169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.791609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.791636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.791957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.791989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.792425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.792452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.792896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.792924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.793367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.793398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.793802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.793831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.794159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.794186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.794603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.794630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.795059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.795086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.795504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.795532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.795942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.795970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.796353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.796381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.796700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.796728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.797158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.797186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.797623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.797649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.798003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.798030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.798436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.798465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.798881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.798910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.799335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.799362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.799790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.799817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.800184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.800211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.800649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.800678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.801078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.801105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.801525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.801552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.801811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.801839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.802294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.802327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.802648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.802680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.803089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.803118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.133 qpair failed and we were unable to recover it.
00:29:16.133 [2024-07-24 23:17:33.803550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.133 [2024-07-24 23:17:33.803576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.134 qpair failed and we were unable to recover it.
00:29:16.134 [2024-07-24 23:17:33.803988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.804016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 00:29:16.134 [2024-07-24 23:17:33.804432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.804460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 00:29:16.134 [2024-07-24 23:17:33.804723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.804762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 00:29:16.134 [2024-07-24 23:17:33.805060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.805090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 00:29:16.134 [2024-07-24 23:17:33.805548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.805575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 
00:29:16.134 [2024-07-24 23:17:33.805931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.805959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 00:29:16.134 [2024-07-24 23:17:33.806395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.806423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 00:29:16.134 [2024-07-24 23:17:33.806859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.806887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 00:29:16.134 [2024-07-24 23:17:33.807326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.807354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 00:29:16.134 [2024-07-24 23:17:33.807774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.807802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 
00:29:16.134 [2024-07-24 23:17:33.808256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.808283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 00:29:16.134 [2024-07-24 23:17:33.808723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.808761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 00:29:16.134 [2024-07-24 23:17:33.809183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.809211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 00:29:16.134 [2024-07-24 23:17:33.809657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.809685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 00:29:16.134 [2024-07-24 23:17:33.810140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.810170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 
00:29:16.134 [2024-07-24 23:17:33.810517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.810544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 00:29:16.134 [2024-07-24 23:17:33.811035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.811133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 00:29:16.134 [2024-07-24 23:17:33.811655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.811692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 00:29:16.134 [2024-07-24 23:17:33.812148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.812179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 00:29:16.134 [2024-07-24 23:17:33.812597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.812624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 
00:29:16.134 [2024-07-24 23:17:33.813056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.813085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 00:29:16.134 [2024-07-24 23:17:33.813500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.813527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 00:29:16.134 [2024-07-24 23:17:33.813851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.813888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 00:29:16.134 [2024-07-24 23:17:33.814341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.814370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 00:29:16.134 [2024-07-24 23:17:33.814793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.814821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 
00:29:16.134 [2024-07-24 23:17:33.815252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.815278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 00:29:16.134 [2024-07-24 23:17:33.815716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.815742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 00:29:16.134 [2024-07-24 23:17:33.816188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.816215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 00:29:16.134 [2024-07-24 23:17:33.816703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.816731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 00:29:16.134 [2024-07-24 23:17:33.816985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.817012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 
00:29:16.134 [2024-07-24 23:17:33.817457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.817485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 00:29:16.134 [2024-07-24 23:17:33.817903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.817932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 00:29:16.134 [2024-07-24 23:17:33.818366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.818393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 00:29:16.134 [2024-07-24 23:17:33.818807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.818837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 00:29:16.134 [2024-07-24 23:17:33.819271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.819298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 
00:29:16.134 [2024-07-24 23:17:33.819781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.819810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 00:29:16.134 [2024-07-24 23:17:33.820236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.134 [2024-07-24 23:17:33.820269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.134 qpair failed and we were unable to recover it. 00:29:16.134 [2024-07-24 23:17:33.820684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.820712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 00:29:16.135 [2024-07-24 23:17:33.821029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.821059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 00:29:16.135 [2024-07-24 23:17:33.821484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.821511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 
00:29:16.135 [2024-07-24 23:17:33.821934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.821962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 00:29:16.135 [2024-07-24 23:17:33.822382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.822409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 00:29:16.135 [2024-07-24 23:17:33.822843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.822873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 00:29:16.135 [2024-07-24 23:17:33.823323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.823350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 00:29:16.135 [2024-07-24 23:17:33.823780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.823808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 
00:29:16.135 [2024-07-24 23:17:33.824231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.824259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 00:29:16.135 [2024-07-24 23:17:33.824550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.824580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 00:29:16.135 [2024-07-24 23:17:33.825008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.825036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 00:29:16.135 [2024-07-24 23:17:33.825451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.825478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 00:29:16.135 [2024-07-24 23:17:33.825902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.825930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 
00:29:16.135 [2024-07-24 23:17:33.826354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.826381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 00:29:16.135 [2024-07-24 23:17:33.826804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.826833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 00:29:16.135 [2024-07-24 23:17:33.827272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.827298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 00:29:16.135 [2024-07-24 23:17:33.827743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.827784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 00:29:16.135 [2024-07-24 23:17:33.828207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.828235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 
00:29:16.135 [2024-07-24 23:17:33.828665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.828692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 00:29:16.135 [2024-07-24 23:17:33.829032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.829063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 00:29:16.135 [2024-07-24 23:17:33.829495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.829523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 00:29:16.135 [2024-07-24 23:17:33.829952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.829982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 00:29:16.135 [2024-07-24 23:17:33.830300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.830328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 
00:29:16.135 [2024-07-24 23:17:33.830793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.830822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 00:29:16.135 [2024-07-24 23:17:33.831229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.831256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 00:29:16.135 [2024-07-24 23:17:33.831665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.831693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 00:29:16.135 [2024-07-24 23:17:33.831972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.832001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 00:29:16.135 [2024-07-24 23:17:33.832483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.832510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 
00:29:16.135 [2024-07-24 23:17:33.832834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.832862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 00:29:16.135 [2024-07-24 23:17:33.833305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.833333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 00:29:16.135 [2024-07-24 23:17:33.833843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.833872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 00:29:16.135 [2024-07-24 23:17:33.834268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.834295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 00:29:16.135 [2024-07-24 23:17:33.834740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.834778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 
00:29:16.135 [2024-07-24 23:17:33.835253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.835280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 00:29:16.135 [2024-07-24 23:17:33.835722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.835749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 00:29:16.135 [2024-07-24 23:17:33.836152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.836179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 00:29:16.135 [2024-07-24 23:17:33.836500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.836531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 00:29:16.135 [2024-07-24 23:17:33.836959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.135 [2024-07-24 23:17:33.836989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.135 qpair failed and we were unable to recover it. 
00:29:16.135 [2024-07-24 23:17:33.837494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.136 [2024-07-24 23:17:33.837521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.136 qpair failed and we were unable to recover it. 00:29:16.136 [2024-07-24 23:17:33.837972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.136 [2024-07-24 23:17:33.838008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.136 qpair failed and we were unable to recover it. 00:29:16.136 [2024-07-24 23:17:33.838480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.136 [2024-07-24 23:17:33.838508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.136 qpair failed and we were unable to recover it. 00:29:16.136 [2024-07-24 23:17:33.838927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.136 [2024-07-24 23:17:33.838957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.136 qpair failed and we were unable to recover it. 00:29:16.136 [2024-07-24 23:17:33.839276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.136 [2024-07-24 23:17:33.839303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.136 qpair failed and we were unable to recover it. 
00:29:16.136 [2024-07-24 23:17:33.839629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.136 [2024-07-24 23:17:33.839655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.136 qpair failed and we were unable to recover it. 00:29:16.136 [2024-07-24 23:17:33.840107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.136 [2024-07-24 23:17:33.840135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.136 qpair failed and we were unable to recover it. 00:29:16.136 [2024-07-24 23:17:33.840564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.136 [2024-07-24 23:17:33.840591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.136 qpair failed and we were unable to recover it. 00:29:16.136 [2024-07-24 23:17:33.841028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.136 [2024-07-24 23:17:33.841057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.136 qpair failed and we were unable to recover it. 00:29:16.136 [2024-07-24 23:17:33.841473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.136 [2024-07-24 23:17:33.841500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.136 qpair failed and we were unable to recover it. 
00:29:16.136 [2024-07-24 23:17:33.841915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.136 [2024-07-24 23:17:33.841944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.136 qpair failed and we were unable to recover it. 00:29:16.136 [2024-07-24 23:17:33.842243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.136 [2024-07-24 23:17:33.842274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.136 qpair failed and we were unable to recover it. 00:29:16.136 [2024-07-24 23:17:33.842550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.136 [2024-07-24 23:17:33.842577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.136 qpair failed and we were unable to recover it. 00:29:16.136 [2024-07-24 23:17:33.843001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.136 [2024-07-24 23:17:33.843029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.136 qpair failed and we were unable to recover it. 00:29:16.136 [2024-07-24 23:17:33.843472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.136 [2024-07-24 23:17:33.843501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.136 qpair failed and we were unable to recover it. 
00:29:16.136 [2024-07-24 23:17:33.843919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.136 [2024-07-24 23:17:33.843947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.136 qpair failed and we were unable to recover it. 00:29:16.136 [2024-07-24 23:17:33.844387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.136 [2024-07-24 23:17:33.844414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.136 qpair failed and we were unable to recover it. 00:29:16.136 [2024-07-24 23:17:33.844748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.136 [2024-07-24 23:17:33.844786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.136 qpair failed and we were unable to recover it. 00:29:16.136 [2024-07-24 23:17:33.845258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.136 [2024-07-24 23:17:33.845284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.136 qpair failed and we were unable to recover it. 00:29:16.136 [2024-07-24 23:17:33.845719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.136 [2024-07-24 23:17:33.845747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.136 qpair failed and we were unable to recover it. 
00:29:16.136 [2024-07-24 23:17:33.846078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.136 [2024-07-24 23:17:33.846104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.136 qpair failed and we were unable to recover it. 00:29:16.136 [2024-07-24 23:17:33.846551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.136 [2024-07-24 23:17:33.846577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.136 qpair failed and we were unable to recover it. 00:29:16.136 [2024-07-24 23:17:33.847057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.136 [2024-07-24 23:17:33.847086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.136 qpair failed and we were unable to recover it. 00:29:16.136 [2024-07-24 23:17:33.847502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.136 [2024-07-24 23:17:33.847530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.136 qpair failed and we were unable to recover it. 00:29:16.136 [2024-07-24 23:17:33.848074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.136 [2024-07-24 23:17:33.848174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.136 qpair failed and we were unable to recover it. 
00:29:16.136 [2024-07-24 23:17:33.848703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.136 [2024-07-24 23:17:33.848738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.136 qpair failed and we were unable to recover it. 00:29:16.136 [2024-07-24 23:17:33.849177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.136 [2024-07-24 23:17:33.849207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.136 qpair failed and we were unable to recover it. 00:29:16.136 [2024-07-24 23:17:33.849636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.136 [2024-07-24 23:17:33.849664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.136 qpair failed and we were unable to recover it. 00:29:16.136 [2024-07-24 23:17:33.850094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.136 [2024-07-24 23:17:33.850125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.136 qpair failed and we were unable to recover it. 00:29:16.136 [2024-07-24 23:17:33.850571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.136 [2024-07-24 23:17:33.850598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.136 qpair failed and we were unable to recover it. 
00:29:16.136 [2024-07-24 23:17:33.851012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.136 [2024-07-24 23:17:33.851042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.136 qpair failed and we were unable to recover it. 00:29:16.136 [2024-07-24 23:17:33.851348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.136 [2024-07-24 23:17:33.851376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.136 qpair failed and we were unable to recover it. 00:29:16.136 [2024-07-24 23:17:33.851810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.136 [2024-07-24 23:17:33.851839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.136 qpair failed and we were unable to recover it. 00:29:16.137 [2024-07-24 23:17:33.852258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.852286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 00:29:16.137 [2024-07-24 23:17:33.852701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.852728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 
00:29:16.137 [2024-07-24 23:17:33.853046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.853075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 00:29:16.137 [2024-07-24 23:17:33.853552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.853579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 00:29:16.137 [2024-07-24 23:17:33.853933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.853961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 00:29:16.137 [2024-07-24 23:17:33.854362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.854390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 00:29:16.137 [2024-07-24 23:17:33.854814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.854844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 
00:29:16.137 [2024-07-24 23:17:33.855281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.855308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 00:29:16.137 [2024-07-24 23:17:33.855726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.855770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 00:29:16.137 [2024-07-24 23:17:33.856249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.856276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 00:29:16.137 [2024-07-24 23:17:33.856764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.856793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 00:29:16.137 [2024-07-24 23:17:33.857160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.857187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 
00:29:16.137 [2024-07-24 23:17:33.857623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.857650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 00:29:16.137 [2024-07-24 23:17:33.858146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.858174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 00:29:16.137 [2024-07-24 23:17:33.858496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.858525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 00:29:16.137 [2024-07-24 23:17:33.858958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.858987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 00:29:16.137 [2024-07-24 23:17:33.859403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.859430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 
00:29:16.137 [2024-07-24 23:17:33.859851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.859879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 00:29:16.137 [2024-07-24 23:17:33.860294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.860322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 00:29:16.137 [2024-07-24 23:17:33.860763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.860792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 00:29:16.137 [2024-07-24 23:17:33.861241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.861273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 00:29:16.137 [2024-07-24 23:17:33.861590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.861617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 
00:29:16.137 [2024-07-24 23:17:33.861957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.861987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 00:29:16.137 [2024-07-24 23:17:33.862401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.862427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 00:29:16.137 [2024-07-24 23:17:33.862866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.862896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 00:29:16.137 [2024-07-24 23:17:33.863204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.863232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 00:29:16.137 [2024-07-24 23:17:33.863659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.863688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 
00:29:16.137 [2024-07-24 23:17:33.864107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.864136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 00:29:16.137 [2024-07-24 23:17:33.864565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.864592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 00:29:16.137 [2024-07-24 23:17:33.865011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.865039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 00:29:16.137 [2024-07-24 23:17:33.865396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.865422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 00:29:16.137 [2024-07-24 23:17:33.865858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.865888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 
00:29:16.137 [2024-07-24 23:17:33.866336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.866363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 00:29:16.137 [2024-07-24 23:17:33.866785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.866813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 00:29:16.137 [2024-07-24 23:17:33.867128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.867161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 00:29:16.137 [2024-07-24 23:17:33.867589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.867619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 00:29:16.137 [2024-07-24 23:17:33.868056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.137 [2024-07-24 23:17:33.868084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.137 qpair failed and we were unable to recover it. 
00:29:16.137 [2024-07-24 23:17:33.868510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.868536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 00:29:16.138 [2024-07-24 23:17:33.868821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.868852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 00:29:16.138 [2024-07-24 23:17:33.869276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.869303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 00:29:16.138 [2024-07-24 23:17:33.869677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.869703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 00:29:16.138 [2024-07-24 23:17:33.870163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.870191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 
00:29:16.138 [2024-07-24 23:17:33.870562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.870589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 00:29:16.138 [2024-07-24 23:17:33.871006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.871035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 00:29:16.138 [2024-07-24 23:17:33.871331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.871360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 00:29:16.138 [2024-07-24 23:17:33.871807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.871836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 00:29:16.138 [2024-07-24 23:17:33.872274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.872300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 
00:29:16.138 [2024-07-24 23:17:33.872724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.872762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 00:29:16.138 [2024-07-24 23:17:33.873179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.873213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 00:29:16.138 [2024-07-24 23:17:33.873613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.873641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 00:29:16.138 [2024-07-24 23:17:33.874055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.874084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 00:29:16.138 [2024-07-24 23:17:33.874395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.874421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 
00:29:16.138 [2024-07-24 23:17:33.874864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.874894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 00:29:16.138 [2024-07-24 23:17:33.875341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.875369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 00:29:16.138 [2024-07-24 23:17:33.875798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.875825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 00:29:16.138 [2024-07-24 23:17:33.876270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.876297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 00:29:16.138 [2024-07-24 23:17:33.876740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.876778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 
00:29:16.138 [2024-07-24 23:17:33.877224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.877252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 00:29:16.138 [2024-07-24 23:17:33.877688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.877715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 00:29:16.138 [2024-07-24 23:17:33.878038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.878073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 00:29:16.138 [2024-07-24 23:17:33.878467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.878495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 00:29:16.138 [2024-07-24 23:17:33.878944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.878973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 
00:29:16.138 [2024-07-24 23:17:33.879407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.879436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 00:29:16.138 [2024-07-24 23:17:33.879871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.879901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 00:29:16.138 [2024-07-24 23:17:33.880310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.880337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 00:29:16.138 [2024-07-24 23:17:33.880843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.880871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 00:29:16.138 [2024-07-24 23:17:33.881289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.881317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 
00:29:16.138 [2024-07-24 23:17:33.881778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.881808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 00:29:16.138 [2024-07-24 23:17:33.882267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.882294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 00:29:16.138 [2024-07-24 23:17:33.882611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.882642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 00:29:16.138 [2024-07-24 23:17:33.883055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.883084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 00:29:16.138 [2024-07-24 23:17:33.883520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.883548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 
00:29:16.138 [2024-07-24 23:17:33.883961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.883989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 00:29:16.138 [2024-07-24 23:17:33.884308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.884334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.138 qpair failed and we were unable to recover it. 00:29:16.138 [2024-07-24 23:17:33.884768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.138 [2024-07-24 23:17:33.884796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.139 qpair failed and we were unable to recover it. 00:29:16.139 [2024-07-24 23:17:33.885199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.139 [2024-07-24 23:17:33.885228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.139 qpair failed and we were unable to recover it. 00:29:16.139 [2024-07-24 23:17:33.885658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.139 [2024-07-24 23:17:33.885686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.139 qpair failed and we were unable to recover it. 
00:29:16.139 [2024-07-24 23:17:33.886061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.139 [2024-07-24 23:17:33.886089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.139 qpair failed and we were unable to recover it. 00:29:16.139 [2024-07-24 23:17:33.886516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.139 [2024-07-24 23:17:33.886543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.139 qpair failed and we were unable to recover it. 00:29:16.139 [2024-07-24 23:17:33.887025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.139 [2024-07-24 23:17:33.887054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.139 qpair failed and we were unable to recover it. 00:29:16.139 [2024-07-24 23:17:33.887505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.139 [2024-07-24 23:17:33.887531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.139 qpair failed and we were unable to recover it. 00:29:16.139 [2024-07-24 23:17:33.887843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.139 [2024-07-24 23:17:33.887875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.139 qpair failed and we were unable to recover it. 
00:29:16.139 [2024-07-24 23:17:33.888312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.139 [2024-07-24 23:17:33.888341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.139 qpair failed and we were unable to recover it. 00:29:16.139 [2024-07-24 23:17:33.888787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.139 [2024-07-24 23:17:33.888821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.139 qpair failed and we were unable to recover it. 00:29:16.139 [2024-07-24 23:17:33.889247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.139 [2024-07-24 23:17:33.889274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.139 qpair failed and we were unable to recover it. 00:29:16.139 [2024-07-24 23:17:33.889692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.139 [2024-07-24 23:17:33.889719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.139 qpair failed and we were unable to recover it. 00:29:16.139 [2024-07-24 23:17:33.890143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.139 [2024-07-24 23:17:33.890171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.139 qpair failed and we were unable to recover it. 
00:29:16.139 [2024-07-24 23:17:33.890493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.139 [2024-07-24 23:17:33.890524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.139 qpair failed and we were unable to recover it. 00:29:16.139 [2024-07-24 23:17:33.890946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.139 [2024-07-24 23:17:33.890982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.139 qpair failed and we were unable to recover it. 00:29:16.139 [2024-07-24 23:17:33.891395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.139 [2024-07-24 23:17:33.891423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.139 qpair failed and we were unable to recover it. 00:29:16.139 [2024-07-24 23:17:33.891711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.139 [2024-07-24 23:17:33.891737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.139 qpair failed and we were unable to recover it. 00:29:16.139 [2024-07-24 23:17:33.892178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.139 [2024-07-24 23:17:33.892206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.139 qpair failed and we were unable to recover it. 
00:29:16.139 [2024-07-24 23:17:33.892581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.139 [2024-07-24 23:17:33.892609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.139 qpair failed and we were unable to recover it. 00:29:16.139 [2024-07-24 23:17:33.893115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.139 [2024-07-24 23:17:33.893143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.139 qpair failed and we were unable to recover it. 00:29:16.139 [2024-07-24 23:17:33.893545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.139 [2024-07-24 23:17:33.893572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.139 qpair failed and we were unable to recover it. 00:29:16.139 [2024-07-24 23:17:33.893989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.139 [2024-07-24 23:17:33.894017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.139 qpair failed and we were unable to recover it. 00:29:16.139 [2024-07-24 23:17:33.894493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.139 [2024-07-24 23:17:33.894520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.139 qpair failed and we were unable to recover it. 
00:29:16.139 [2024-07-24 23:17:33.894895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.139 [2024-07-24 23:17:33.894924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.139 qpair failed and we were unable to recover it. 00:29:16.139 [2024-07-24 23:17:33.895340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.139 [2024-07-24 23:17:33.895367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.139 qpair failed and we were unable to recover it. 00:29:16.139 [2024-07-24 23:17:33.895803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.139 [2024-07-24 23:17:33.895832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.139 qpair failed and we were unable to recover it. 00:29:16.139 [2024-07-24 23:17:33.896264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.139 [2024-07-24 23:17:33.896291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.139 qpair failed and we were unable to recover it. 00:29:16.139 [2024-07-24 23:17:33.896734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.139 [2024-07-24 23:17:33.896800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.139 qpair failed and we were unable to recover it. 
00:29:16.139 [2024-07-24 23:17:33.897231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.139 [2024-07-24 23:17:33.897259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.139 qpair failed and we were unable to recover it. 00:29:16.139 [2024-07-24 23:17:33.897693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.139 [2024-07-24 23:17:33.897720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.139 qpair failed and we were unable to recover it. 00:29:16.139 [2024-07-24 23:17:33.898057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.139 [2024-07-24 23:17:33.898091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.139 qpair failed and we were unable to recover it. 00:29:16.139 [2024-07-24 23:17:33.898526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.139 [2024-07-24 23:17:33.898553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.139 qpair failed and we were unable to recover it. 00:29:16.139 [2024-07-24 23:17:33.898865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.139 [2024-07-24 23:17:33.898893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.139 qpair failed and we were unable to recover it. 
00:29:16.408 [2024-07-24 23:17:33.899359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.408 [2024-07-24 23:17:33.899390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.408 qpair failed and we were unable to recover it. 00:29:16.408 [2024-07-24 23:17:33.899812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.408 [2024-07-24 23:17:33.899842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.408 qpair failed and we were unable to recover it. 00:29:16.408 [2024-07-24 23:17:33.900281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.408 [2024-07-24 23:17:33.900308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.408 qpair failed and we were unable to recover it. 00:29:16.408 [2024-07-24 23:17:33.900722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.408 [2024-07-24 23:17:33.900749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.408 qpair failed and we were unable to recover it. 00:29:16.408 [2024-07-24 23:17:33.901072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.408 [2024-07-24 23:17:33.901100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.408 qpair failed and we were unable to recover it. 
00:29:16.408 [2024-07-24 23:17:33.901522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.901550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 00:29:16.409 [2024-07-24 23:17:33.902001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.902030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 00:29:16.409 [2024-07-24 23:17:33.902453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.902479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 00:29:16.409 [2024-07-24 23:17:33.902909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.902939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 00:29:16.409 [2024-07-24 23:17:33.903386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.903413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 
00:29:16.409 [2024-07-24 23:17:33.903830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.903858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 00:29:16.409 [2024-07-24 23:17:33.904303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.904331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 00:29:16.409 [2024-07-24 23:17:33.904595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.904621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 00:29:16.409 [2024-07-24 23:17:33.905051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.905079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 00:29:16.409 [2024-07-24 23:17:33.905437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.905464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 
00:29:16.409 [2024-07-24 23:17:33.905849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.905877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 00:29:16.409 [2024-07-24 23:17:33.906332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.906358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 00:29:16.409 [2024-07-24 23:17:33.906787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.906815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 00:29:16.409 [2024-07-24 23:17:33.907135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.907164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 00:29:16.409 [2024-07-24 23:17:33.907593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.907620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 
00:29:16.409 [2024-07-24 23:17:33.907939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.907967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 00:29:16.409 [2024-07-24 23:17:33.908382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.908415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 00:29:16.409 [2024-07-24 23:17:33.908843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.908872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 00:29:16.409 [2024-07-24 23:17:33.909260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.909287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 00:29:16.409 [2024-07-24 23:17:33.909599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.909626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 
00:29:16.409 [2024-07-24 23:17:33.910094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.910121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 00:29:16.409 [2024-07-24 23:17:33.910523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.910550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 00:29:16.409 [2024-07-24 23:17:33.910961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.910990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 00:29:16.409 [2024-07-24 23:17:33.911425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.911452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 00:29:16.409 [2024-07-24 23:17:33.911913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.911941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 
00:29:16.409 [2024-07-24 23:17:33.912380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.912407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 00:29:16.409 [2024-07-24 23:17:33.912793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.912820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 00:29:16.409 [2024-07-24 23:17:33.913223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.913250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 00:29:16.409 [2024-07-24 23:17:33.913698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.913726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 00:29:16.409 [2024-07-24 23:17:33.914157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.914185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 
00:29:16.409 [2024-07-24 23:17:33.914612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.914639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 00:29:16.409 [2024-07-24 23:17:33.915053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.915082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 00:29:16.409 [2024-07-24 23:17:33.915493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.915521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 00:29:16.409 [2024-07-24 23:17:33.915964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.915993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 00:29:16.409 [2024-07-24 23:17:33.916289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.916318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 
00:29:16.409 [2024-07-24 23:17:33.916621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.916652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 00:29:16.409 [2024-07-24 23:17:33.917118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.917147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 00:29:16.409 [2024-07-24 23:17:33.917563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.409 [2024-07-24 23:17:33.917590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.409 qpair failed and we were unable to recover it. 00:29:16.410 [2024-07-24 23:17:33.918010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.410 [2024-07-24 23:17:33.918037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.410 qpair failed and we were unable to recover it. 00:29:16.410 [2024-07-24 23:17:33.918472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.410 [2024-07-24 23:17:33.918501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.410 qpair failed and we were unable to recover it. 
00:29:16.410 [2024-07-24 23:17:33.918946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.410 [2024-07-24 23:17:33.918975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.410 qpair failed and we were unable to recover it. 00:29:16.410 [2024-07-24 23:17:33.919393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.410 [2024-07-24 23:17:33.919420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.410 qpair failed and we were unable to recover it. 00:29:16.410 [2024-07-24 23:17:33.919864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.410 [2024-07-24 23:17:33.919893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.410 qpair failed and we were unable to recover it. 00:29:16.410 [2024-07-24 23:17:33.920325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.410 [2024-07-24 23:17:33.920358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.410 qpair failed and we were unable to recover it. 00:29:16.410 [2024-07-24 23:17:33.920743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.410 [2024-07-24 23:17:33.920783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.410 qpair failed and we were unable to recover it. 
00:29:16.410 [2024-07-24 23:17:33.921083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.410 [2024-07-24 23:17:33.921110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.410 qpair failed and we were unable to recover it.
00:29:16.413 [the same connect() failed / sock connection error / "qpair failed and we were unable to recover it" triplet for tqpair=0x7f6bf4000b90 (addr=10.0.0.2, port=4420) repeats continuously from 23:17:33.921534 through 23:17:33.971225]
00:29:16.413 [2024-07-24 23:17:33.971674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.413 [2024-07-24 23:17:33.971702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.413 qpair failed and we were unable to recover it. 00:29:16.413 [2024-07-24 23:17:33.972163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.413 [2024-07-24 23:17:33.972191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.413 qpair failed and we were unable to recover it. 00:29:16.413 [2024-07-24 23:17:33.972623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.413 [2024-07-24 23:17:33.972652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.413 qpair failed and we were unable to recover it. 00:29:16.413 [2024-07-24 23:17:33.973116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.413 [2024-07-24 23:17:33.973151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.413 qpair failed and we were unable to recover it. 00:29:16.413 [2024-07-24 23:17:33.973474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.413 [2024-07-24 23:17:33.973500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.413 qpair failed and we were unable to recover it. 
00:29:16.413 [2024-07-24 23:17:33.973952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.413 [2024-07-24 23:17:33.973980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.413 qpair failed and we were unable to recover it. 00:29:16.413 [2024-07-24 23:17:33.974414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.413 [2024-07-24 23:17:33.974441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.413 qpair failed and we were unable to recover it. 00:29:16.413 [2024-07-24 23:17:33.974772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.413 [2024-07-24 23:17:33.974803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.413 qpair failed and we were unable to recover it. 00:29:16.413 [2024-07-24 23:17:33.975129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.413 [2024-07-24 23:17:33.975156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.413 qpair failed and we were unable to recover it. 00:29:16.413 [2024-07-24 23:17:33.975599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.413 [2024-07-24 23:17:33.975625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.413 qpair failed and we were unable to recover it. 
00:29:16.413 [2024-07-24 23:17:33.976055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.413 [2024-07-24 23:17:33.976084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.413 qpair failed and we were unable to recover it. 00:29:16.413 [2024-07-24 23:17:33.976407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.413 [2024-07-24 23:17:33.976434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.413 qpair failed and we were unable to recover it. 00:29:16.413 [2024-07-24 23:17:33.976836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.413 [2024-07-24 23:17:33.976865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.413 qpair failed and we were unable to recover it. 00:29:16.413 [2024-07-24 23:17:33.977333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.413 [2024-07-24 23:17:33.977360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.413 qpair failed and we were unable to recover it. 00:29:16.413 [2024-07-24 23:17:33.977838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.413 [2024-07-24 23:17:33.977865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.413 qpair failed and we were unable to recover it. 
00:29:16.413 [2024-07-24 23:17:33.978131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.413 [2024-07-24 23:17:33.978157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.413 qpair failed and we were unable to recover it. 00:29:16.413 [2024-07-24 23:17:33.978593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.413 [2024-07-24 23:17:33.978619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.413 qpair failed and we were unable to recover it. 00:29:16.413 [2024-07-24 23:17:33.979030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.413 [2024-07-24 23:17:33.979058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.413 qpair failed and we were unable to recover it. 00:29:16.413 [2024-07-24 23:17:33.979501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.413 [2024-07-24 23:17:33.979529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.413 qpair failed and we were unable to recover it. 00:29:16.413 [2024-07-24 23:17:33.979962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.413 [2024-07-24 23:17:33.979990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.413 qpair failed and we were unable to recover it. 
00:29:16.413 [2024-07-24 23:17:33.980416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.413 [2024-07-24 23:17:33.980442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.413 qpair failed and we were unable to recover it. 00:29:16.414 [2024-07-24 23:17:33.980861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.980891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 00:29:16.414 [2024-07-24 23:17:33.981321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.981348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 00:29:16.414 [2024-07-24 23:17:33.981706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.981734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 00:29:16.414 [2024-07-24 23:17:33.982159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.982188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 
00:29:16.414 [2024-07-24 23:17:33.982603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.982630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 00:29:16.414 [2024-07-24 23:17:33.983051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.983079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 00:29:16.414 [2024-07-24 23:17:33.983522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.983550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 00:29:16.414 [2024-07-24 23:17:33.983967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.983996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 00:29:16.414 [2024-07-24 23:17:33.984484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.984511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 
00:29:16.414 [2024-07-24 23:17:33.984829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.984859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 00:29:16.414 [2024-07-24 23:17:33.985295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.985323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 00:29:16.414 [2024-07-24 23:17:33.985774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.985803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 00:29:16.414 [2024-07-24 23:17:33.986220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.986247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 00:29:16.414 [2024-07-24 23:17:33.986661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.986689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 
00:29:16.414 [2024-07-24 23:17:33.987191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.987219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 00:29:16.414 [2024-07-24 23:17:33.987669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.987696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 00:29:16.414 [2024-07-24 23:17:33.988015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.988045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 00:29:16.414 [2024-07-24 23:17:33.988477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.988505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 00:29:16.414 [2024-07-24 23:17:33.988784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.988812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 
00:29:16.414 [2024-07-24 23:17:33.989268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.989294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 00:29:16.414 [2024-07-24 23:17:33.989604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.989635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 00:29:16.414 [2024-07-24 23:17:33.989951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.989980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 00:29:16.414 [2024-07-24 23:17:33.990412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.990444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 00:29:16.414 [2024-07-24 23:17:33.990883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.990912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 
00:29:16.414 [2024-07-24 23:17:33.991229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.991257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 00:29:16.414 [2024-07-24 23:17:33.991672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.991699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 00:29:16.414 [2024-07-24 23:17:33.992108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.992136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 00:29:16.414 [2024-07-24 23:17:33.992545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.992572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 00:29:16.414 [2024-07-24 23:17:33.993021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.993050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 
00:29:16.414 [2024-07-24 23:17:33.993488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.993514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 00:29:16.414 [2024-07-24 23:17:33.993934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.993962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 00:29:16.414 [2024-07-24 23:17:33.994376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.994402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 00:29:16.414 [2024-07-24 23:17:33.994888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.994917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 00:29:16.414 [2024-07-24 23:17:33.995446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.995473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 
00:29:16.414 [2024-07-24 23:17:33.995855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.995884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 00:29:16.414 [2024-07-24 23:17:33.996178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.996205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 00:29:16.414 [2024-07-24 23:17:33.996618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.414 [2024-07-24 23:17:33.996645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.414 qpair failed and we were unable to recover it. 00:29:16.414 [2024-07-24 23:17:33.997054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:33.997083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 00:29:16.415 [2024-07-24 23:17:33.997502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:33.997530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 
00:29:16.415 [2024-07-24 23:17:33.997725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:33.997789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 00:29:16.415 [2024-07-24 23:17:33.998273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:33.998301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 00:29:16.415 [2024-07-24 23:17:33.998581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:33.998608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 00:29:16.415 [2024-07-24 23:17:33.999038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:33.999067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 00:29:16.415 [2024-07-24 23:17:33.999483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:33.999510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 
00:29:16.415 [2024-07-24 23:17:33.999950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:33.999978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 00:29:16.415 [2024-07-24 23:17:34.000342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:34.000370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 00:29:16.415 [2024-07-24 23:17:34.000773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:34.000801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 00:29:16.415 [2024-07-24 23:17:34.001245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:34.001272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 00:29:16.415 [2024-07-24 23:17:34.001695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:34.001723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 
00:29:16.415 [2024-07-24 23:17:34.002175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:34.002203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 00:29:16.415 [2024-07-24 23:17:34.002489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:34.002516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 00:29:16.415 [2024-07-24 23:17:34.002770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:34.002799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 00:29:16.415 [2024-07-24 23:17:34.003225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:34.003251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 00:29:16.415 [2024-07-24 23:17:34.003686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:34.003714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 
00:29:16.415 [2024-07-24 23:17:34.004172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:34.004201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 00:29:16.415 [2024-07-24 23:17:34.004512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:34.004540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 00:29:16.415 [2024-07-24 23:17:34.005054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:34.005082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 00:29:16.415 [2024-07-24 23:17:34.005309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:34.005335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 00:29:16.415 [2024-07-24 23:17:34.005778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:34.005806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 
00:29:16.415 [2024-07-24 23:17:34.006249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:34.006278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 00:29:16.415 [2024-07-24 23:17:34.006687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:34.006714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 00:29:16.415 [2024-07-24 23:17:34.007185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:34.007213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 00:29:16.415 [2024-07-24 23:17:34.007647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:34.007679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 00:29:16.415 [2024-07-24 23:17:34.008091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:34.008119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 
00:29:16.415 [2024-07-24 23:17:34.008564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:34.008592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 00:29:16.415 [2024-07-24 23:17:34.009008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:34.009036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 00:29:16.415 [2024-07-24 23:17:34.009490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:34.009518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 00:29:16.415 [2024-07-24 23:17:34.009852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:34.009880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 00:29:16.415 [2024-07-24 23:17:34.010309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:34.010336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 
00:29:16.415 [2024-07-24 23:17:34.010771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:34.010800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 00:29:16.415 [2024-07-24 23:17:34.011238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:34.011264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 00:29:16.415 [2024-07-24 23:17:34.011680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:34.011708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 00:29:16.415 [2024-07-24 23:17:34.012140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:34.012169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 00:29:16.415 [2024-07-24 23:17:34.012609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:34.012636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 
00:29:16.415 [2024-07-24 23:17:34.013047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.415 [2024-07-24 23:17:34.013076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.415 qpair failed and we were unable to recover it. 00:29:16.416 [2024-07-24 23:17:34.013441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.416 [2024-07-24 23:17:34.013468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.416 qpair failed and we were unable to recover it. 00:29:16.416 [2024-07-24 23:17:34.013954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.416 [2024-07-24 23:17:34.013983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.416 qpair failed and we were unable to recover it. 00:29:16.416 [2024-07-24 23:17:34.014414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.416 [2024-07-24 23:17:34.014441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.416 qpair failed and we were unable to recover it. 00:29:16.416 [2024-07-24 23:17:34.014690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.416 [2024-07-24 23:17:34.014717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.416 qpair failed and we were unable to recover it. 
00:29:16.416 [2024-07-24 23:17:34.015141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.416 [2024-07-24 23:17:34.015169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.416 qpair failed and we were unable to recover it. 00:29:16.416 [2024-07-24 23:17:34.015589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.416 [2024-07-24 23:17:34.015616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.416 qpair failed and we were unable to recover it. 00:29:16.416 [2024-07-24 23:17:34.015935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.416 [2024-07-24 23:17:34.015966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.416 qpair failed and we were unable to recover it. 00:29:16.416 [2024-07-24 23:17:34.016341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.416 [2024-07-24 23:17:34.016368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.416 qpair failed and we were unable to recover it. 00:29:16.416 [2024-07-24 23:17:34.016855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.416 [2024-07-24 23:17:34.016883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.416 qpair failed and we were unable to recover it. 
00:29:16.416 [2024-07-24 23:17:34.017276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.416 [2024-07-24 23:17:34.017303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.416 qpair failed and we were unable to recover it. 00:29:16.416 [2024-07-24 23:17:34.017713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.416 [2024-07-24 23:17:34.017741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.416 qpair failed and we were unable to recover it. 00:29:16.416 [2024-07-24 23:17:34.018196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.416 [2024-07-24 23:17:34.018224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.416 qpair failed and we were unable to recover it. 00:29:16.416 [2024-07-24 23:17:34.018650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.416 [2024-07-24 23:17:34.018676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.416 qpair failed and we were unable to recover it. 00:29:16.416 [2024-07-24 23:17:34.019083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.416 [2024-07-24 23:17:34.019111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.416 qpair failed and we were unable to recover it. 
00:29:16.416 [2024-07-24 23:17:34.019524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.416 [2024-07-24 23:17:34.019553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.416 qpair failed and we were unable to recover it. 00:29:16.416 [2024-07-24 23:17:34.019992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.416 [2024-07-24 23:17:34.020021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.416 qpair failed and we were unable to recover it. 00:29:16.416 [2024-07-24 23:17:34.020442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.416 [2024-07-24 23:17:34.020469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.416 qpair failed and we were unable to recover it. 00:29:16.416 [2024-07-24 23:17:34.020849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.416 [2024-07-24 23:17:34.020877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.416 qpair failed and we were unable to recover it. 00:29:16.416 [2024-07-24 23:17:34.021312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.416 [2024-07-24 23:17:34.021339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.416 qpair failed and we were unable to recover it. 
00:29:16.416 [2024-07-24 23:17:34.021777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.416 [2024-07-24 23:17:34.021807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.416 qpair failed and we were unable to recover it. 00:29:16.416 [2024-07-24 23:17:34.022103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.416 [2024-07-24 23:17:34.022129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.416 qpair failed and we were unable to recover it. 00:29:16.416 [2024-07-24 23:17:34.022544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.416 [2024-07-24 23:17:34.022571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.416 qpair failed and we were unable to recover it. 00:29:16.416 [2024-07-24 23:17:34.023021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.416 [2024-07-24 23:17:34.023049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.416 qpair failed and we were unable to recover it. 00:29:16.416 [2024-07-24 23:17:34.023487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.416 [2024-07-24 23:17:34.023514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.416 qpair failed and we were unable to recover it. 
00:29:16.416 [2024-07-24 23:17:34.023833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.416 [2024-07-24 23:17:34.023861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.416 qpair failed and we were unable to recover it. 00:29:16.416 [2024-07-24 23:17:34.024213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.416 [2024-07-24 23:17:34.024239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.416 qpair failed and we were unable to recover it. 00:29:16.416 [2024-07-24 23:17:34.024676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.416 [2024-07-24 23:17:34.024704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.416 qpair failed and we were unable to recover it. 00:29:16.416 [2024-07-24 23:17:34.025195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.416 [2024-07-24 23:17:34.025230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.416 qpair failed and we were unable to recover it. 00:29:16.416 [2024-07-24 23:17:34.025663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.416 [2024-07-24 23:17:34.025689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.416 qpair failed and we were unable to recover it. 
00:29:16.417 [2024-07-24 23:17:34.026098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.026127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 00:29:16.417 [2024-07-24 23:17:34.026565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.026593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 00:29:16.417 [2024-07-24 23:17:34.027043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.027072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 00:29:16.417 [2024-07-24 23:17:34.027486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.027514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 00:29:16.417 [2024-07-24 23:17:34.027946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.027974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 
00:29:16.417 [2024-07-24 23:17:34.028402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.028429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 00:29:16.417 [2024-07-24 23:17:34.028863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.028893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 00:29:16.417 [2024-07-24 23:17:34.029331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.029359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 00:29:16.417 [2024-07-24 23:17:34.029776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.029804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 00:29:16.417 [2024-07-24 23:17:34.030332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.030359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 
00:29:16.417 [2024-07-24 23:17:34.030685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.030712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 00:29:16.417 [2024-07-24 23:17:34.031137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.031164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 00:29:16.417 [2024-07-24 23:17:34.031606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.031633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 00:29:16.417 [2024-07-24 23:17:34.031954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.031985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 00:29:16.417 [2024-07-24 23:17:34.032392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.032419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 
00:29:16.417 [2024-07-24 23:17:34.032840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.032868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 00:29:16.417 [2024-07-24 23:17:34.033316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.033343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 00:29:16.417 [2024-07-24 23:17:34.033766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.033796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 00:29:16.417 [2024-07-24 23:17:34.034327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.034354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 00:29:16.417 [2024-07-24 23:17:34.034774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.034803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 
00:29:16.417 [2024-07-24 23:17:34.035131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.035158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 00:29:16.417 [2024-07-24 23:17:34.035642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.035670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 00:29:16.417 [2024-07-24 23:17:34.036081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.036110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 00:29:16.417 [2024-07-24 23:17:34.036547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.036574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 00:29:16.417 [2024-07-24 23:17:34.036986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.037087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 
00:29:16.417 [2024-07-24 23:17:34.037615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.037652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 00:29:16.417 [2024-07-24 23:17:34.038022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.038053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 00:29:16.417 [2024-07-24 23:17:34.038510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.038538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 00:29:16.417 [2024-07-24 23:17:34.038977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.039005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 00:29:16.417 [2024-07-24 23:17:34.039444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.039471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 
00:29:16.417 [2024-07-24 23:17:34.039901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.039932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 00:29:16.417 [2024-07-24 23:17:34.040347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.040375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 00:29:16.417 [2024-07-24 23:17:34.040825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.040854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 00:29:16.417 [2024-07-24 23:17:34.041290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.041317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 00:29:16.417 [2024-07-24 23:17:34.041763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.041792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 
00:29:16.417 [2024-07-24 23:17:34.042224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-07-24 23:17:34.042252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.417 qpair failed and we were unable to recover it. 00:29:16.417 [2024-07-24 23:17:34.042665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.418 [2024-07-24 23:17:34.042692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.418 qpair failed and we were unable to recover it. 00:29:16.418 [2024-07-24 23:17:34.043153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.418 [2024-07-24 23:17:34.043182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.418 qpair failed and we were unable to recover it. 00:29:16.418 [2024-07-24 23:17:34.043632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.418 [2024-07-24 23:17:34.043666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.418 qpair failed and we were unable to recover it. 00:29:16.418 [2024-07-24 23:17:34.044147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.418 [2024-07-24 23:17:34.044175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.418 qpair failed and we were unable to recover it. 
00:29:16.418 [2024-07-24 23:17:34.044608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.418 [2024-07-24 23:17:34.044637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.418 qpair failed and we were unable to recover it. 00:29:16.418 [2024-07-24 23:17:34.045069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.418 [2024-07-24 23:17:34.045098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.418 qpair failed and we were unable to recover it. 00:29:16.418 [2024-07-24 23:17:34.045523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.418 [2024-07-24 23:17:34.045551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.418 qpair failed and we were unable to recover it. 00:29:16.418 [2024-07-24 23:17:34.045884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.418 [2024-07-24 23:17:34.045912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.418 qpair failed and we were unable to recover it. 00:29:16.418 [2024-07-24 23:17:34.046337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.418 [2024-07-24 23:17:34.046363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.418 qpair failed and we were unable to recover it. 
00:29:16.418 [2024-07-24 23:17:34.046795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.418 [2024-07-24 23:17:34.046825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.418 qpair failed and we were unable to recover it. 00:29:16.418 [2024-07-24 23:17:34.047116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.418 [2024-07-24 23:17:34.047142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.418 qpair failed and we were unable to recover it. 00:29:16.418 [2024-07-24 23:17:34.047572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.418 [2024-07-24 23:17:34.047599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.418 qpair failed and we were unable to recover it. 00:29:16.418 [2024-07-24 23:17:34.047914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.418 [2024-07-24 23:17:34.047943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.418 qpair failed and we were unable to recover it. 00:29:16.418 [2024-07-24 23:17:34.048376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.418 [2024-07-24 23:17:34.048404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.418 qpair failed and we were unable to recover it. 
00:29:16.422 [2024-07-24 23:17:34.096237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.422 [2024-07-24 23:17:34.096265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.422 qpair failed and we were unable to recover it. 00:29:16.422 [2024-07-24 23:17:34.096585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.422 [2024-07-24 23:17:34.096613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.422 qpair failed and we were unable to recover it. 00:29:16.422 [2024-07-24 23:17:34.096931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.422 [2024-07-24 23:17:34.096962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.422 qpair failed and we were unable to recover it. 00:29:16.422 [2024-07-24 23:17:34.097379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.422 [2024-07-24 23:17:34.097406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.422 qpair failed and we were unable to recover it. 00:29:16.422 [2024-07-24 23:17:34.097857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.422 [2024-07-24 23:17:34.097885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.422 qpair failed and we were unable to recover it. 
00:29:16.422 [2024-07-24 23:17:34.098172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.422 [2024-07-24 23:17:34.098198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.422 qpair failed and we were unable to recover it. 00:29:16.422 [2024-07-24 23:17:34.098520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.422 [2024-07-24 23:17:34.098549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.422 qpair failed and we were unable to recover it. 00:29:16.422 [2024-07-24 23:17:34.098961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.422 [2024-07-24 23:17:34.098990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.422 qpair failed and we were unable to recover it. 00:29:16.422 [2024-07-24 23:17:34.099401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.422 [2024-07-24 23:17:34.099428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.422 qpair failed and we were unable to recover it. 00:29:16.422 [2024-07-24 23:17:34.099862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.422 [2024-07-24 23:17:34.099889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.422 qpair failed and we were unable to recover it. 
00:29:16.422 [2024-07-24 23:17:34.100202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.422 [2024-07-24 23:17:34.100230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.422 qpair failed and we were unable to recover it. 00:29:16.422 [2024-07-24 23:17:34.100673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.422 [2024-07-24 23:17:34.100699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.422 qpair failed and we were unable to recover it. 00:29:16.422 [2024-07-24 23:17:34.101187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.422 [2024-07-24 23:17:34.101217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.422 qpair failed and we were unable to recover it. 00:29:16.422 [2024-07-24 23:17:34.101649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.422 [2024-07-24 23:17:34.101676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.422 qpair failed and we were unable to recover it. 00:29:16.422 [2024-07-24 23:17:34.102090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.422 [2024-07-24 23:17:34.102117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.422 qpair failed and we were unable to recover it. 
00:29:16.422 [2024-07-24 23:17:34.102533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.422 [2024-07-24 23:17:34.102561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.422 qpair failed and we were unable to recover it. 00:29:16.422 [2024-07-24 23:17:34.102982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.422 [2024-07-24 23:17:34.103011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.422 qpair failed and we were unable to recover it. 00:29:16.422 [2024-07-24 23:17:34.103439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.422 [2024-07-24 23:17:34.103466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.422 qpair failed and we were unable to recover it. 00:29:16.422 [2024-07-24 23:17:34.103904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.422 [2024-07-24 23:17:34.103933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.422 qpair failed and we were unable to recover it. 00:29:16.422 [2024-07-24 23:17:34.104387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.422 [2024-07-24 23:17:34.104414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.422 qpair failed and we were unable to recover it. 
00:29:16.422 [2024-07-24 23:17:34.104740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.422 [2024-07-24 23:17:34.104776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.422 qpair failed and we were unable to recover it. 00:29:16.422 [2024-07-24 23:17:34.105094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.422 [2024-07-24 23:17:34.105123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.422 qpair failed and we were unable to recover it. 00:29:16.422 [2024-07-24 23:17:34.105559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.422 [2024-07-24 23:17:34.105586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.422 qpair failed and we were unable to recover it. 00:29:16.422 [2024-07-24 23:17:34.105839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.422 [2024-07-24 23:17:34.105868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.422 qpair failed and we were unable to recover it. 00:29:16.422 [2024-07-24 23:17:34.106201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.422 [2024-07-24 23:17:34.106228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.422 qpair failed and we were unable to recover it. 
00:29:16.422 [2024-07-24 23:17:34.106553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.422 [2024-07-24 23:17:34.106583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.422 qpair failed and we were unable to recover it. 00:29:16.422 [2024-07-24 23:17:34.107077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.422 [2024-07-24 23:17:34.107106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.422 qpair failed and we were unable to recover it. 00:29:16.422 [2024-07-24 23:17:34.107361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.422 [2024-07-24 23:17:34.107387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.422 qpair failed and we were unable to recover it. 00:29:16.422 [2024-07-24 23:17:34.107792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.422 [2024-07-24 23:17:34.107820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.422 qpair failed and we were unable to recover it. 00:29:16.422 [2024-07-24 23:17:34.108261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.422 [2024-07-24 23:17:34.108287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.423 qpair failed and we were unable to recover it. 
00:29:16.423 [2024-07-24 23:17:34.108725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.423 [2024-07-24 23:17:34.108760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.423 qpair failed and we were unable to recover it. 00:29:16.423 [2024-07-24 23:17:34.109216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.423 [2024-07-24 23:17:34.109243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.423 qpair failed and we were unable to recover it. 00:29:16.423 [2024-07-24 23:17:34.109682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.423 [2024-07-24 23:17:34.109708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.423 qpair failed and we were unable to recover it. 00:29:16.423 [2024-07-24 23:17:34.110155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.423 [2024-07-24 23:17:34.110183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.423 qpair failed and we were unable to recover it. 00:29:16.423 [2024-07-24 23:17:34.110623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.423 [2024-07-24 23:17:34.110649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.423 qpair failed and we were unable to recover it. 
00:29:16.423 [2024-07-24 23:17:34.111106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.423 [2024-07-24 23:17:34.111135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.423 qpair failed and we were unable to recover it. 00:29:16.423 [2024-07-24 23:17:34.111569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.423 [2024-07-24 23:17:34.111597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.423 qpair failed and we were unable to recover it. 00:29:16.423 [2024-07-24 23:17:34.112045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.423 [2024-07-24 23:17:34.112075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.423 qpair failed and we were unable to recover it. 00:29:16.423 [2024-07-24 23:17:34.112505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.423 [2024-07-24 23:17:34.112539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.423 qpair failed and we were unable to recover it. 00:29:16.423 [2024-07-24 23:17:34.113002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.423 [2024-07-24 23:17:34.113030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.423 qpair failed and we were unable to recover it. 
00:29:16.423 [2024-07-24 23:17:34.113456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.423 [2024-07-24 23:17:34.113483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.423 qpair failed and we were unable to recover it. 00:29:16.423 [2024-07-24 23:17:34.113926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.423 [2024-07-24 23:17:34.113953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.423 qpair failed and we were unable to recover it. 00:29:16.423 [2024-07-24 23:17:34.114358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.423 [2024-07-24 23:17:34.114385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.423 qpair failed and we were unable to recover it. 00:29:16.423 [2024-07-24 23:17:34.114881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.423 [2024-07-24 23:17:34.114909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.423 qpair failed and we were unable to recover it. 00:29:16.423 [2024-07-24 23:17:34.115305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.423 [2024-07-24 23:17:34.115332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.423 qpair failed and we were unable to recover it. 
00:29:16.423 [2024-07-24 23:17:34.115772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.423 [2024-07-24 23:17:34.115801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.423 qpair failed and we were unable to recover it. 00:29:16.423 [2024-07-24 23:17:34.116230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.423 [2024-07-24 23:17:34.116258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.423 qpair failed and we were unable to recover it. 00:29:16.423 [2024-07-24 23:17:34.116703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.423 [2024-07-24 23:17:34.116729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.423 qpair failed and we were unable to recover it. 00:29:16.423 [2024-07-24 23:17:34.117054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.423 [2024-07-24 23:17:34.117085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.423 qpair failed and we were unable to recover it. 00:29:16.423 [2024-07-24 23:17:34.117399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.423 [2024-07-24 23:17:34.117426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.423 qpair failed and we were unable to recover it. 
00:29:16.423 [2024-07-24 23:17:34.117623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.423 [2024-07-24 23:17:34.117654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.423 qpair failed and we were unable to recover it. 00:29:16.423 [2024-07-24 23:17:34.118079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.423 [2024-07-24 23:17:34.118107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.423 qpair failed and we were unable to recover it. 00:29:16.423 [2024-07-24 23:17:34.118520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.423 [2024-07-24 23:17:34.118548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.423 qpair failed and we were unable to recover it. 00:29:16.423 [2024-07-24 23:17:34.118970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.423 [2024-07-24 23:17:34.118998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.423 qpair failed and we were unable to recover it. 00:29:16.423 [2024-07-24 23:17:34.119429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.423 [2024-07-24 23:17:34.119456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.423 qpair failed and we were unable to recover it. 
00:29:16.423 [2024-07-24 23:17:34.119718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.423 [2024-07-24 23:17:34.119745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.423 qpair failed and we were unable to recover it. 00:29:16.423 [2024-07-24 23:17:34.120217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.424 [2024-07-24 23:17:34.120245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.424 qpair failed and we were unable to recover it. 00:29:16.424 [2024-07-24 23:17:34.120421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.424 [2024-07-24 23:17:34.120451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.424 qpair failed and we were unable to recover it. 00:29:16.424 [2024-07-24 23:17:34.120892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.424 [2024-07-24 23:17:34.120920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.424 qpair failed and we were unable to recover it. 00:29:16.424 [2024-07-24 23:17:34.121327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.424 [2024-07-24 23:17:34.121354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.424 qpair failed and we were unable to recover it. 
00:29:16.424 [2024-07-24 23:17:34.121732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.424 [2024-07-24 23:17:34.121766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.424 qpair failed and we were unable to recover it. 00:29:16.424 [2024-07-24 23:17:34.122106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.424 [2024-07-24 23:17:34.122132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.424 qpair failed and we were unable to recover it. 00:29:16.424 [2024-07-24 23:17:34.122535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.424 [2024-07-24 23:17:34.122561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.424 qpair failed and we were unable to recover it. 00:29:16.424 [2024-07-24 23:17:34.122876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.424 [2024-07-24 23:17:34.122903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.424 qpair failed and we were unable to recover it. 00:29:16.424 [2024-07-24 23:17:34.123352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.424 [2024-07-24 23:17:34.123378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.424 qpair failed and we were unable to recover it. 
00:29:16.424 [2024-07-24 23:17:34.123816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.424 [2024-07-24 23:17:34.123846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.424 qpair failed and we were unable to recover it. 00:29:16.424 [2024-07-24 23:17:34.124293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.424 [2024-07-24 23:17:34.124320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.424 qpair failed and we were unable to recover it. 00:29:16.424 [2024-07-24 23:17:34.124746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.424 [2024-07-24 23:17:34.124782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.424 qpair failed and we were unable to recover it. 00:29:16.424 [2024-07-24 23:17:34.125194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.424 [2024-07-24 23:17:34.125222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.424 qpair failed and we were unable to recover it. 00:29:16.424 [2024-07-24 23:17:34.125714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.424 [2024-07-24 23:17:34.125741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.424 qpair failed and we were unable to recover it. 
00:29:16.424 [2024-07-24 23:17:34.126173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.424 [2024-07-24 23:17:34.126200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.424 qpair failed and we were unable to recover it. 00:29:16.424 [2024-07-24 23:17:34.126611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.424 [2024-07-24 23:17:34.126638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.424 qpair failed and we were unable to recover it. 00:29:16.424 [2024-07-24 23:17:34.127004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.424 [2024-07-24 23:17:34.127033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.424 qpair failed and we were unable to recover it. 00:29:16.424 [2024-07-24 23:17:34.127465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.424 [2024-07-24 23:17:34.127493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.424 qpair failed and we were unable to recover it. 00:29:16.424 [2024-07-24 23:17:34.127819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.424 [2024-07-24 23:17:34.127851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.424 qpair failed and we were unable to recover it. 
00:29:16.424 [2024-07-24 23:17:34.128275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.424 [2024-07-24 23:17:34.128303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.424 qpair failed and we were unable to recover it.
[the three lines above repeat ~115 more times between 23:17:34.128 and 23:17:34.178, identical except for advancing timestamps; elided]
00:29:16.427 [2024-07-24 23:17:34.179232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.427 [2024-07-24 23:17:34.179259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.427 qpair failed and we were unable to recover it. 00:29:16.427 [2024-07-24 23:17:34.179713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.427 [2024-07-24 23:17:34.179740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.427 qpair failed and we were unable to recover it. 00:29:16.427 [2024-07-24 23:17:34.180064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.427 [2024-07-24 23:17:34.180095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.427 qpair failed and we were unable to recover it. 00:29:16.427 [2024-07-24 23:17:34.180502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.427 [2024-07-24 23:17:34.180529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.427 qpair failed and we were unable to recover it. 00:29:16.427 [2024-07-24 23:17:34.180963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.427 [2024-07-24 23:17:34.180993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.427 qpair failed and we were unable to recover it. 
00:29:16.427 [2024-07-24 23:17:34.181374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.427 [2024-07-24 23:17:34.181403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.427 qpair failed and we were unable to recover it. 00:29:16.427 [2024-07-24 23:17:34.181724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.427 [2024-07-24 23:17:34.181762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.427 qpair failed and we were unable to recover it. 00:29:16.427 [2024-07-24 23:17:34.182208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.427 [2024-07-24 23:17:34.182235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.427 qpair failed and we were unable to recover it. 00:29:16.427 [2024-07-24 23:17:34.182668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.427 [2024-07-24 23:17:34.182695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.427 qpair failed and we were unable to recover it. 00:29:16.427 [2024-07-24 23:17:34.183032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.427 [2024-07-24 23:17:34.183060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.427 qpair failed and we were unable to recover it. 
00:29:16.427 [2024-07-24 23:17:34.183494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.427 [2024-07-24 23:17:34.183521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.427 qpair failed and we were unable to recover it. 00:29:16.427 [2024-07-24 23:17:34.183948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.427 [2024-07-24 23:17:34.183977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.427 qpair failed and we were unable to recover it. 00:29:16.427 [2024-07-24 23:17:34.184413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.428 [2024-07-24 23:17:34.184440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.428 qpair failed and we were unable to recover it. 00:29:16.428 [2024-07-24 23:17:34.184898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.428 [2024-07-24 23:17:34.184928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.428 qpair failed and we were unable to recover it. 00:29:16.428 [2024-07-24 23:17:34.185420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.428 [2024-07-24 23:17:34.185448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.428 qpair failed and we were unable to recover it. 
00:29:16.698 [2024-07-24 23:17:34.185878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.698 [2024-07-24 23:17:34.185912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.698 qpair failed and we were unable to recover it. 00:29:16.698 [2024-07-24 23:17:34.186352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.698 [2024-07-24 23:17:34.186380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.698 qpair failed and we were unable to recover it. 00:29:16.698 [2024-07-24 23:17:34.186817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.698 [2024-07-24 23:17:34.186846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.698 qpair failed and we were unable to recover it. 00:29:16.698 [2024-07-24 23:17:34.187195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.698 [2024-07-24 23:17:34.187223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.698 qpair failed and we were unable to recover it. 00:29:16.698 [2024-07-24 23:17:34.187659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.698 [2024-07-24 23:17:34.187686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.698 qpair failed and we were unable to recover it. 
00:29:16.698 [2024-07-24 23:17:34.188106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.698 [2024-07-24 23:17:34.188135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.698 qpair failed and we were unable to recover it. 00:29:16.698 [2024-07-24 23:17:34.188560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.698 [2024-07-24 23:17:34.188587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.698 qpair failed and we were unable to recover it. 00:29:16.698 [2024-07-24 23:17:34.188830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.698 [2024-07-24 23:17:34.188866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.698 qpair failed and we were unable to recover it. 00:29:16.698 [2024-07-24 23:17:34.189211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.698 [2024-07-24 23:17:34.189239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.698 qpair failed and we were unable to recover it. 00:29:16.698 [2024-07-24 23:17:34.189718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.698 [2024-07-24 23:17:34.189745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.698 qpair failed and we were unable to recover it. 
00:29:16.698 [2024-07-24 23:17:34.190198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.698 [2024-07-24 23:17:34.190225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.698 qpair failed and we were unable to recover it. 00:29:16.698 [2024-07-24 23:17:34.190645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.698 [2024-07-24 23:17:34.190672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.698 qpair failed and we were unable to recover it. 00:29:16.698 [2024-07-24 23:17:34.191092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.698 [2024-07-24 23:17:34.191120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.698 qpair failed and we were unable to recover it. 00:29:16.698 [2024-07-24 23:17:34.191553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.698 [2024-07-24 23:17:34.191581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.698 qpair failed and we were unable to recover it. 00:29:16.698 [2024-07-24 23:17:34.192026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.698 [2024-07-24 23:17:34.192055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.698 qpair failed and we were unable to recover it. 
00:29:16.698 [2024-07-24 23:17:34.192507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.698 [2024-07-24 23:17:34.192534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.698 qpair failed and we were unable to recover it. 00:29:16.698 [2024-07-24 23:17:34.192983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.698 [2024-07-24 23:17:34.193011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.698 qpair failed and we were unable to recover it. 00:29:16.698 [2024-07-24 23:17:34.193333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.698 [2024-07-24 23:17:34.193360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.698 qpair failed and we were unable to recover it. 00:29:16.698 [2024-07-24 23:17:34.193645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.193672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 00:29:16.699 [2024-07-24 23:17:34.194088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.194116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 
00:29:16.699 [2024-07-24 23:17:34.194579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.194614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 00:29:16.699 [2024-07-24 23:17:34.195030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.195058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 00:29:16.699 [2024-07-24 23:17:34.195464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.195490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 00:29:16.699 [2024-07-24 23:17:34.195937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.195965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 00:29:16.699 [2024-07-24 23:17:34.196383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.196410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 
00:29:16.699 [2024-07-24 23:17:34.196837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.196866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 00:29:16.699 [2024-07-24 23:17:34.197317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.197344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 00:29:16.699 [2024-07-24 23:17:34.197784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.197812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 00:29:16.699 [2024-07-24 23:17:34.198254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.198281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 00:29:16.699 [2024-07-24 23:17:34.198716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.198743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 
00:29:16.699 [2024-07-24 23:17:34.199172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.199200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 00:29:16.699 [2024-07-24 23:17:34.199646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.199673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 00:29:16.699 [2024-07-24 23:17:34.200101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.200130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 00:29:16.699 [2024-07-24 23:17:34.200557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.200584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 00:29:16.699 [2024-07-24 23:17:34.200906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.200935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 
00:29:16.699 [2024-07-24 23:17:34.201354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.201381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 00:29:16.699 [2024-07-24 23:17:34.201813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.201841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 00:29:16.699 [2024-07-24 23:17:34.202126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.202155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 00:29:16.699 [2024-07-24 23:17:34.202558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.202585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 00:29:16.699 [2024-07-24 23:17:34.203064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.203093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 
00:29:16.699 [2024-07-24 23:17:34.203525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.203552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 00:29:16.699 [2024-07-24 23:17:34.204034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.204062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 00:29:16.699 [2024-07-24 23:17:34.204458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.204486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 00:29:16.699 [2024-07-24 23:17:34.204917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.204947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 00:29:16.699 [2024-07-24 23:17:34.205387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.205415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 
00:29:16.699 [2024-07-24 23:17:34.205854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.205882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 00:29:16.699 [2024-07-24 23:17:34.206308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.206335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 00:29:16.699 [2024-07-24 23:17:34.206650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.206678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 00:29:16.699 [2024-07-24 23:17:34.207110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.207138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 00:29:16.699 [2024-07-24 23:17:34.207574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.207601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 
00:29:16.699 [2024-07-24 23:17:34.208020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.208048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 00:29:16.699 [2024-07-24 23:17:34.208467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.208494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 00:29:16.699 [2024-07-24 23:17:34.208887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.208916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 00:29:16.699 [2024-07-24 23:17:34.209338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.209365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 00:29:16.699 [2024-07-24 23:17:34.209576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-07-24 23:17:34.209602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 
00:29:16.699 [2024-07-24 23:17:34.210106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-07-24 23:17:34.210133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 00:29:16.700 [2024-07-24 23:17:34.210453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-07-24 23:17:34.210480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 00:29:16.700 [2024-07-24 23:17:34.210917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-07-24 23:17:34.210945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 00:29:16.700 [2024-07-24 23:17:34.211392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-07-24 23:17:34.211420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 00:29:16.700 [2024-07-24 23:17:34.211865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-07-24 23:17:34.211893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 
00:29:16.700 [2024-07-24 23:17:34.212281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-07-24 23:17:34.212321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 
00:29:16.703 [... the same three-line error sequence — posix.c:1023:posix_sock_create connect() failed (errno = 111), nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats roughly 115 more times with only the timestamps advancing, from 23:17:34.212 through 23:17:34.261 ...]
00:29:16.703 [2024-07-24 23:17:34.261491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-07-24 23:17:34.261517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-07-24 23:17:34.261956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-07-24 23:17:34.261984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-07-24 23:17:34.262418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-07-24 23:17:34.262446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-07-24 23:17:34.262888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-07-24 23:17:34.262936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-07-24 23:17:34.263239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-07-24 23:17:34.263267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 
00:29:16.703 [2024-07-24 23:17:34.263702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-07-24 23:17:34.263735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-07-24 23:17:34.264063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-07-24 23:17:34.264091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-07-24 23:17:34.264534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-07-24 23:17:34.264561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-07-24 23:17:34.264979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-07-24 23:17:34.265006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-07-24 23:17:34.265402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-07-24 23:17:34.265429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 
00:29:16.703 [2024-07-24 23:17:34.265875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-07-24 23:17:34.265903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-07-24 23:17:34.266341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-07-24 23:17:34.266367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-07-24 23:17:34.266785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-07-24 23:17:34.266813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-07-24 23:17:34.267212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-07-24 23:17:34.267239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-07-24 23:17:34.267666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-07-24 23:17:34.267692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 
00:29:16.703 [2024-07-24 23:17:34.268138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-07-24 23:17:34.268167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-07-24 23:17:34.268615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-07-24 23:17:34.268642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-07-24 23:17:34.269118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-07-24 23:17:34.269147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-07-24 23:17:34.269578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-07-24 23:17:34.269605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-07-24 23:17:34.270043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-07-24 23:17:34.270072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 
00:29:16.703 [2024-07-24 23:17:34.270487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-07-24 23:17:34.270514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-07-24 23:17:34.270897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-07-24 23:17:34.270927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-07-24 23:17:34.271328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-07-24 23:17:34.271354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-07-24 23:17:34.271769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.271797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-07-24 23:17:34.272241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.272268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 
00:29:16.704 [2024-07-24 23:17:34.272702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.272729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-07-24 23:17:34.273147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.273174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-07-24 23:17:34.273566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.273594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-07-24 23:17:34.274019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.274047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-07-24 23:17:34.274342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.274368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 
00:29:16.704 [2024-07-24 23:17:34.274652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.274679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-07-24 23:17:34.274958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.274985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-07-24 23:17:34.275415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.275443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-07-24 23:17:34.275885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.275913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-07-24 23:17:34.276334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.276361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 
00:29:16.704 [2024-07-24 23:17:34.276777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.276805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-07-24 23:17:34.277231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.277258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-07-24 23:17:34.277692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.277719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-07-24 23:17:34.278017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.278046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-07-24 23:17:34.278360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.278390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 
00:29:16.704 [2024-07-24 23:17:34.278806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.278835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-07-24 23:17:34.279261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.279288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-07-24 23:17:34.279627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.279654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-07-24 23:17:34.280088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.280116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-07-24 23:17:34.280556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.280583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 
00:29:16.704 [2024-07-24 23:17:34.281016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.281049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-07-24 23:17:34.281504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.281530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-07-24 23:17:34.281945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.281973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-07-24 23:17:34.282474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.282500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-07-24 23:17:34.282919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.282948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 
00:29:16.704 [2024-07-24 23:17:34.283418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.283444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-07-24 23:17:34.283776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.283804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-07-24 23:17:34.284207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.284234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-07-24 23:17:34.284670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.284697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-07-24 23:17:34.285119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.285148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 
00:29:16.704 [2024-07-24 23:17:34.285587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.285614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-07-24 23:17:34.286053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.286081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-07-24 23:17:34.286491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.286518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-07-24 23:17:34.286936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.286964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-07-24 23:17:34.287405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.287432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 
00:29:16.704 [2024-07-24 23:17:34.287856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-07-24 23:17:34.287884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-07-24 23:17:34.288191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.705 [2024-07-24 23:17:34.288223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.705 qpair failed and we were unable to recover it. 00:29:16.705 [2024-07-24 23:17:34.288656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.705 [2024-07-24 23:17:34.288683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.705 qpair failed and we were unable to recover it. 00:29:16.705 [2024-07-24 23:17:34.288984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.705 [2024-07-24 23:17:34.289022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.705 qpair failed and we were unable to recover it. 00:29:16.705 [2024-07-24 23:17:34.289316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.705 [2024-07-24 23:17:34.289345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.705 qpair failed and we were unable to recover it. 
00:29:16.705 [2024-07-24 23:17:34.289781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.705 [2024-07-24 23:17:34.289809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.705 qpair failed and we were unable to recover it. 00:29:16.705 [2024-07-24 23:17:34.290208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.705 [2024-07-24 23:17:34.290236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.705 qpair failed and we were unable to recover it. 00:29:16.705 [2024-07-24 23:17:34.290683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.705 [2024-07-24 23:17:34.290710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.705 qpair failed and we were unable to recover it. 00:29:16.705 [2024-07-24 23:17:34.291155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.705 [2024-07-24 23:17:34.291183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.705 qpair failed and we were unable to recover it. 00:29:16.705 [2024-07-24 23:17:34.291466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.705 [2024-07-24 23:17:34.291492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.705 qpair failed and we were unable to recover it. 
00:29:16.705 [2024-07-24 23:17:34.291945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.705 [2024-07-24 23:17:34.291974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.705 qpair failed and we were unable to recover it. 00:29:16.705 [2024-07-24 23:17:34.292389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.705 [2024-07-24 23:17:34.292416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.705 qpair failed and we were unable to recover it. 00:29:16.705 [2024-07-24 23:17:34.292741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.705 [2024-07-24 23:17:34.292783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.705 qpair failed and we were unable to recover it. 00:29:16.705 [2024-07-24 23:17:34.293191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.705 [2024-07-24 23:17:34.293218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.705 qpair failed and we were unable to recover it. 00:29:16.705 [2024-07-24 23:17:34.293655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.705 [2024-07-24 23:17:34.293681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.705 qpair failed and we were unable to recover it. 
00:29:16.705 [2024-07-24 23:17:34.294153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.705 [2024-07-24 23:17:34.294181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.705 qpair failed and we were unable to recover it. 00:29:16.705 [2024-07-24 23:17:34.294553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.705 [2024-07-24 23:17:34.294579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.705 qpair failed and we were unable to recover it. 00:29:16.705 [2024-07-24 23:17:34.294993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.705 [2024-07-24 23:17:34.295021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.705 qpair failed and we were unable to recover it. 00:29:16.705 [2024-07-24 23:17:34.295455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.705 [2024-07-24 23:17:34.295482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.705 qpair failed and we were unable to recover it. 00:29:16.705 [2024-07-24 23:17:34.295899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.705 [2024-07-24 23:17:34.295928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.705 qpair failed and we were unable to recover it. 
00:29:16.708 [2024-07-24 23:17:34.343939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-07-24 23:17:34.343968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-07-24 23:17:34.344366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-07-24 23:17:34.344393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-07-24 23:17:34.344786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-07-24 23:17:34.344814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-07-24 23:17:34.345268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-07-24 23:17:34.345296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-07-24 23:17:34.345651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-07-24 23:17:34.345677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 
00:29:16.708 [2024-07-24 23:17:34.346115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-07-24 23:17:34.346143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-07-24 23:17:34.346533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-07-24 23:17:34.346560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-07-24 23:17:34.346992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-07-24 23:17:34.347020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-07-24 23:17:34.347469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-07-24 23:17:34.347495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-07-24 23:17:34.347917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-07-24 23:17:34.347945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 
00:29:16.708 [2024-07-24 23:17:34.348364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-07-24 23:17:34.348391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-07-24 23:17:34.348826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-07-24 23:17:34.348853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-07-24 23:17:34.349268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-07-24 23:17:34.349295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-07-24 23:17:34.349710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-07-24 23:17:34.349737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-07-24 23:17:34.350198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-07-24 23:17:34.350225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 
00:29:16.708 [2024-07-24 23:17:34.350659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-07-24 23:17:34.350692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-07-24 23:17:34.351061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.351090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-07-24 23:17:34.351276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.351306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-07-24 23:17:34.351706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.351733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-07-24 23:17:34.352143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.352171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 
00:29:16.709 [2024-07-24 23:17:34.352567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.352593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-07-24 23:17:34.352939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.352968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-07-24 23:17:34.353479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.353505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-07-24 23:17:34.353916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.353944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-07-24 23:17:34.354359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.354386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 
00:29:16.709 [2024-07-24 23:17:34.354786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.354813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-07-24 23:17:34.355245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.355271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-07-24 23:17:34.355683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.355711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-07-24 23:17:34.356027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.356054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-07-24 23:17:34.356476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.356503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 
00:29:16.709 [2024-07-24 23:17:34.356920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.356948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-07-24 23:17:34.357388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.357416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-07-24 23:17:34.357800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.357828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-07-24 23:17:34.358203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.358230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-07-24 23:17:34.358624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.358651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 
00:29:16.709 [2024-07-24 23:17:34.359075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.359103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-07-24 23:17:34.359462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.359489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-07-24 23:17:34.359807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.359834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-07-24 23:17:34.360282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.360308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-07-24 23:17:34.360741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.360778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 
00:29:16.709 [2024-07-24 23:17:34.361185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.361212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-07-24 23:17:34.361498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.361525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-07-24 23:17:34.361927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.361960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-07-24 23:17:34.362414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.362441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-07-24 23:17:34.362869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.362897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 
00:29:16.709 [2024-07-24 23:17:34.363335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.363362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-07-24 23:17:34.363855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.363882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-07-24 23:17:34.364292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.364319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-07-24 23:17:34.364765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.364794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-07-24 23:17:34.365134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.365160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 
00:29:16.709 [2024-07-24 23:17:34.365596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.365622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-07-24 23:17:34.366046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.366074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-07-24 23:17:34.366496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.366523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-07-24 23:17:34.367053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-07-24 23:17:34.367081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-07-24 23:17:34.367527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.367554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 
00:29:16.710 [2024-07-24 23:17:34.367988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.368016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 00:29:16.710 [2024-07-24 23:17:34.368453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.368480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 00:29:16.710 [2024-07-24 23:17:34.368898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.368926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 00:29:16.710 [2024-07-24 23:17:34.369372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.369399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 00:29:16.710 [2024-07-24 23:17:34.369847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.369876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 
00:29:16.710 [2024-07-24 23:17:34.370324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.370351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 00:29:16.710 [2024-07-24 23:17:34.370668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.370694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 00:29:16.710 [2024-07-24 23:17:34.371114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.371141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 00:29:16.710 [2024-07-24 23:17:34.371580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.371607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 00:29:16.710 [2024-07-24 23:17:34.371924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.371951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 
00:29:16.710 [2024-07-24 23:17:34.372368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.372394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 00:29:16.710 [2024-07-24 23:17:34.372714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.372744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 00:29:16.710 [2024-07-24 23:17:34.373162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.373189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 00:29:16.710 [2024-07-24 23:17:34.373636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.373662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 00:29:16.710 [2024-07-24 23:17:34.374081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.374110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 
00:29:16.710 [2024-07-24 23:17:34.374441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.374468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 00:29:16.710 [2024-07-24 23:17:34.374891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.374919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 00:29:16.710 [2024-07-24 23:17:34.375234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.375265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 00:29:16.710 [2024-07-24 23:17:34.375746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.375784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 00:29:16.710 [2024-07-24 23:17:34.376258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.376285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 
00:29:16.710 [2024-07-24 23:17:34.376602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.376632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 00:29:16.710 [2024-07-24 23:17:34.376921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.376949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 00:29:16.710 [2024-07-24 23:17:34.377231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.377261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 00:29:16.710 [2024-07-24 23:17:34.377678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.377706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 00:29:16.710 [2024-07-24 23:17:34.378036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.378065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 
00:29:16.710 [2024-07-24 23:17:34.378483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.378510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 00:29:16.710 [2024-07-24 23:17:34.378934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.378963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 00:29:16.710 [2024-07-24 23:17:34.379374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.379407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 00:29:16.710 [2024-07-24 23:17:34.379683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.379710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 00:29:16.710 [2024-07-24 23:17:34.380149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.380177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 
00:29:16.710 [2024-07-24 23:17:34.380500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.380529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 00:29:16.710 [2024-07-24 23:17:34.380794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.380821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 00:29:16.710 [2024-07-24 23:17:34.381249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.381275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 00:29:16.710 [2024-07-24 23:17:34.381708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.381734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 00:29:16.710 [2024-07-24 23:17:34.382172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.382200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 
00:29:16.710 [2024-07-24 23:17:34.382620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.710 [2024-07-24 23:17:34.382647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.710 qpair failed and we were unable to recover it. 00:29:16.710 [2024-07-24 23:17:34.383066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.383094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 00:29:16.711 [2024-07-24 23:17:34.383528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.383555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 00:29:16.711 [2024-07-24 23:17:34.383975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.384003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 00:29:16.711 [2024-07-24 23:17:34.384420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.384446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 
00:29:16.711 [2024-07-24 23:17:34.384882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.384910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 00:29:16.711 [2024-07-24 23:17:34.385212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.385239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 00:29:16.711 [2024-07-24 23:17:34.385748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.385784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 00:29:16.711 [2024-07-24 23:17:34.386177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.386203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 00:29:16.711 [2024-07-24 23:17:34.386636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.386662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 
00:29:16.711 [2024-07-24 23:17:34.387075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.387103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 00:29:16.711 [2024-07-24 23:17:34.387521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.387548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 00:29:16.711 [2024-07-24 23:17:34.387967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.387994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 00:29:16.711 [2024-07-24 23:17:34.388364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.388390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 00:29:16.711 [2024-07-24 23:17:34.388565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.388595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 
00:29:16.711 [2024-07-24 23:17:34.389009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.389036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 00:29:16.711 [2024-07-24 23:17:34.389489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.389516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 00:29:16.711 [2024-07-24 23:17:34.389860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.389888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 00:29:16.711 [2024-07-24 23:17:34.390267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.390294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 00:29:16.711 [2024-07-24 23:17:34.390714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.390742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 
00:29:16.711 [2024-07-24 23:17:34.391187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.391214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 00:29:16.711 [2024-07-24 23:17:34.391650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.391678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 00:29:16.711 [2024-07-24 23:17:34.392109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.392139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 00:29:16.711 [2024-07-24 23:17:34.392551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.392577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 00:29:16.711 [2024-07-24 23:17:34.393029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.393057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 
00:29:16.711 [2024-07-24 23:17:34.393488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.393514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 00:29:16.711 [2024-07-24 23:17:34.393899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.393927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 00:29:16.711 [2024-07-24 23:17:34.394235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.394264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 00:29:16.711 [2024-07-24 23:17:34.394569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.394595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 00:29:16.711 [2024-07-24 23:17:34.394914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.394942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 
00:29:16.711 [2024-07-24 23:17:34.395286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.395314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 00:29:16.711 [2024-07-24 23:17:34.395611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.395641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 00:29:16.711 [2024-07-24 23:17:34.396091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.396128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 00:29:16.711 [2024-07-24 23:17:34.396534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.396562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 00:29:16.711 [2024-07-24 23:17:34.396995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.397022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 
00:29:16.711 [2024-07-24 23:17:34.397441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.397467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 00:29:16.711 [2024-07-24 23:17:34.397728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.397765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 00:29:16.711 [2024-07-24 23:17:34.398192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.398219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 00:29:16.711 [2024-07-24 23:17:34.398662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.711 [2024-07-24 23:17:34.398689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.711 qpair failed and we were unable to recover it. 00:29:16.711 [2024-07-24 23:17:34.399114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-07-24 23:17:34.399143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 
00:29:16.712 [2024-07-24 23:17:34.399591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-07-24 23:17:34.399618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-07-24 23:17:34.399944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-07-24 23:17:34.399973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-07-24 23:17:34.400409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-07-24 23:17:34.400435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-07-24 23:17:34.400857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-07-24 23:17:34.400886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-07-24 23:17:34.401300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-07-24 23:17:34.401328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 
00:29:16.712 [2024-07-24 23:17:34.401773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-07-24 23:17:34.401802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-07-24 23:17:34.402254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-07-24 23:17:34.402281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-07-24 23:17:34.402706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-07-24 23:17:34.402733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-07-24 23:17:34.403153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-07-24 23:17:34.403180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-07-24 23:17:34.403537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-07-24 23:17:34.403564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 
00:29:16.712 [2024-07-24 23:17:34.404000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-07-24 23:17:34.404029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-07-24 23:17:34.404457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-07-24 23:17:34.404483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-07-24 23:17:34.404896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-07-24 23:17:34.404924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-07-24 23:17:34.405306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-07-24 23:17:34.405333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-07-24 23:17:34.405750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-07-24 23:17:34.405788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 
00:29:16.712 [2024-07-24 23:17:34.406230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-07-24 23:17:34.406256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-07-24 23:17:34.406675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-07-24 23:17:34.406701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-07-24 23:17:34.407066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-07-24 23:17:34.407094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-07-24 23:17:34.407406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-07-24 23:17:34.407433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-07-24 23:17:34.407887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-07-24 23:17:34.407915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 
00:29:16.712 [2024-07-24 23:17:34.408374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-07-24 23:17:34.408400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-07-24 23:17:34.408839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-07-24 23:17:34.408866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-07-24 23:17:34.409276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-07-24 23:17:34.409302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-07-24 23:17:34.409494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-07-24 23:17:34.409521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-07-24 23:17:34.409922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-07-24 23:17:34.409950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 
00:29:16.712 [2024-07-24 23:17:34.410363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-07-24 23:17:34.410389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-07-24 23:17:34.410822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-07-24 23:17:34.410850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-07-24 23:17:34.411268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-07-24 23:17:34.411295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-07-24 23:17:34.411592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-07-24 23:17:34.411622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-07-24 23:17:34.412084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-07-24 23:17:34.412113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 
00:29:16.712 [2024-07-24 23:17:34.412429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-07-24 23:17:34.412455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-07-24 23:17:34.412873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-07-24 23:17:34.412900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.713 [2024-07-24 23:17:34.413334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-07-24 23:17:34.413367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 00:29:16.713 [2024-07-24 23:17:34.413781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-07-24 23:17:34.413808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 00:29:16.713 [2024-07-24 23:17:34.414233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-07-24 23:17:34.414259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 
00:29:16.713 [2024-07-24 23:17:34.414710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-07-24 23:17:34.414736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 00:29:16.713 [2024-07-24 23:17:34.415149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-07-24 23:17:34.415177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 00:29:16.713 [2024-07-24 23:17:34.415626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-07-24 23:17:34.415652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 00:29:16.713 [2024-07-24 23:17:34.416060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-07-24 23:17:34.416089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 00:29:16.713 [2024-07-24 23:17:34.416501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-07-24 23:17:34.416528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 
00:29:16.713 [2024-07-24 23:17:34.416974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-07-24 23:17:34.417002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 00:29:16.713 [2024-07-24 23:17:34.417434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-07-24 23:17:34.417461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 00:29:16.713 [2024-07-24 23:17:34.417778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-07-24 23:17:34.417810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 00:29:16.713 [2024-07-24 23:17:34.418258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-07-24 23:17:34.418286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 00:29:16.713 [2024-07-24 23:17:34.418707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-07-24 23:17:34.418734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 
00:29:16.713 [2024-07-24 23:17:34.419164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-07-24 23:17:34.419192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 00:29:16.713 [2024-07-24 23:17:34.419511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-07-24 23:17:34.419538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 00:29:16.713 [2024-07-24 23:17:34.420029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-07-24 23:17:34.420058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 00:29:16.713 [2024-07-24 23:17:34.420476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-07-24 23:17:34.420503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 00:29:16.713 [2024-07-24 23:17:34.420830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-07-24 23:17:34.420860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 
00:29:16.713 [2024-07-24 23:17:34.421290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.713 [2024-07-24 23:17:34.421317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.713 qpair failed and we were unable to recover it.
00:29:16.713 [2024-07-24 23:17:34.421732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.713 [2024-07-24 23:17:34.421768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.713 qpair failed and we were unable to recover it.
00:29:16.713 [2024-07-24 23:17:34.422173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.713 [2024-07-24 23:17:34.422200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.713 qpair failed and we were unable to recover it.
00:29:16.713 [2024-07-24 23:17:34.422513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.713 [2024-07-24 23:17:34.422540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.713 qpair failed and we were unable to recover it.
00:29:16.713 [2024-07-24 23:17:34.422956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.713 [2024-07-24 23:17:34.422988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.713 qpair failed and we were unable to recover it.
00:29:16.713 [2024-07-24 23:17:34.423432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.713 [2024-07-24 23:17:34.423459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.713 qpair failed and we were unable to recover it.
00:29:16.713 [2024-07-24 23:17:34.423886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.713 [2024-07-24 23:17:34.423915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.713 qpair failed and we were unable to recover it.
00:29:16.713 [2024-07-24 23:17:34.424441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.713 [2024-07-24 23:17:34.424469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.713 qpair failed and we were unable to recover it.
00:29:16.713 [2024-07-24 23:17:34.424875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.713 [2024-07-24 23:17:34.424903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.713 qpair failed and we were unable to recover it.
00:29:16.713 [2024-07-24 23:17:34.425329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.713 [2024-07-24 23:17:34.425355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.713 qpair failed and we were unable to recover it.
00:29:16.713 [2024-07-24 23:17:34.425672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.713 [2024-07-24 23:17:34.425701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.713 qpair failed and we were unable to recover it.
00:29:16.713 [2024-07-24 23:17:34.426128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.713 [2024-07-24 23:17:34.426157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.713 qpair failed and we were unable to recover it.
00:29:16.713 [2024-07-24 23:17:34.426591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.713 [2024-07-24 23:17:34.426618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.713 qpair failed and we were unable to recover it.
00:29:16.713 [2024-07-24 23:17:34.426943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.713 [2024-07-24 23:17:34.426972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.713 qpair failed and we were unable to recover it.
00:29:16.713 [2024-07-24 23:17:34.427387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.713 [2024-07-24 23:17:34.427414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.713 qpair failed and we were unable to recover it.
00:29:16.713 [2024-07-24 23:17:34.427846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.713 [2024-07-24 23:17:34.427875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.713 qpair failed and we were unable to recover it.
00:29:16.713 [2024-07-24 23:17:34.428317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.713 [2024-07-24 23:17:34.428345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.713 qpair failed and we were unable to recover it.
00:29:16.713 [2024-07-24 23:17:34.428775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.713 [2024-07-24 23:17:34.428803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.713 qpair failed and we were unable to recover it.
00:29:16.713 [2024-07-24 23:17:34.429248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.713 [2024-07-24 23:17:34.429275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.713 qpair failed and we were unable to recover it.
00:29:16.713 [2024-07-24 23:17:34.429723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.713 [2024-07-24 23:17:34.429749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.430197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.430226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.430715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.430741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.431172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.431205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.431648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.431675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.432078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.432108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.432551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.432579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.433028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.433057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.433487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.433514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.433926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.433955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.434388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.434414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.434832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.434860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.435259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.435286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.435686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.435713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.436213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.436241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.436558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.436588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.436961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.436990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.437423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.437451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.437874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.437902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.438285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.438312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.438741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.438777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.439224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.439251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.439667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.439693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.440113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.440141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.440563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.440591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.441003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.441032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.441419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.441446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.441882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.441909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.442358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.442385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.442820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.442848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.443302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.443330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.443744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.443780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.444230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.444258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.444686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.444713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.445170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.445198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.445643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.445670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.446083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.446111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-07-24 23:17:34.446429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-07-24 23:17:34.446456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.446839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.446867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.447156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.447183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.447470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.447499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.447929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.447957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.448362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.448389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.448725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.448765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.449064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.449091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.449518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.449544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.449870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.449898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.450307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.450334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.450634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.450664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.451071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.451099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.451517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.451544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.451971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.451999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.452431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.452458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.452895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.452923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.453362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.453389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.453690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.453721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.454186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.454215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.454650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.454677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.455182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.455211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.455657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.455684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.455999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.456027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.456446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.456473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.456897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.456925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.457240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.457266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.457699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.457725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.458182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.458210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.458534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.458561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.458975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.459004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.459433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.459460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.459896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.459924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.460304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.460331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.460769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.460797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.461314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.461341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.461817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.461846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.462325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.715 [2024-07-24 23:17:34.462352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.715 qpair failed and we were unable to recover it.
00:29:16.715 [2024-07-24 23:17:34.462789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.716 [2024-07-24 23:17:34.462818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:16.716 qpair failed and we were unable to recover it.
00:29:16.716 [2024-07-24 23:17:34.463226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-07-24 23:17:34.463252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-07-24 23:17:34.463690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-07-24 23:17:34.463716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-07-24 23:17:34.464022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-07-24 23:17:34.464049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-07-24 23:17:34.464469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-07-24 23:17:34.464496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-07-24 23:17:34.464936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-07-24 23:17:34.464964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 
00:29:16.716 [2024-07-24 23:17:34.465243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-07-24 23:17:34.465271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-07-24 23:17:34.465695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-07-24 23:17:34.465721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-07-24 23:17:34.466210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-07-24 23:17:34.466244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-07-24 23:17:34.466666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-07-24 23:17:34.466693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-07-24 23:17:34.467024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-07-24 23:17:34.467053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 
00:29:16.716 [2024-07-24 23:17:34.467465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-07-24 23:17:34.467492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-07-24 23:17:34.467909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-07-24 23:17:34.467937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-07-24 23:17:34.468202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-07-24 23:17:34.468229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-07-24 23:17:34.468679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-07-24 23:17:34.468706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-07-24 23:17:34.469121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-07-24 23:17:34.469148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 
00:29:16.716 [2024-07-24 23:17:34.469568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-07-24 23:17:34.469595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-07-24 23:17:34.470012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-07-24 23:17:34.470040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-07-24 23:17:34.470363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-07-24 23:17:34.470392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-07-24 23:17:34.470798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-07-24 23:17:34.470845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-07-24 23:17:34.471272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-07-24 23:17:34.471300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 
00:29:16.716 [2024-07-24 23:17:34.471609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-07-24 23:17:34.471638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-07-24 23:17:34.472048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-07-24 23:17:34.472077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-07-24 23:17:34.472500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-07-24 23:17:34.472527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-07-24 23:17:34.472951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-07-24 23:17:34.472979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-07-24 23:17:34.473427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-07-24 23:17:34.473454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 
00:29:16.716 [2024-07-24 23:17:34.473889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-07-24 23:17:34.473917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-07-24 23:17:34.474354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-07-24 23:17:34.474381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-07-24 23:17:34.474708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-07-24 23:17:34.474734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-07-24 23:17:34.475163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-07-24 23:17:34.475191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.993 qpair failed and we were unable to recover it. 00:29:16.993 [2024-07-24 23:17:34.475626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.993 [2024-07-24 23:17:34.475656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.993 qpair failed and we were unable to recover it. 
00:29:16.993 [2024-07-24 23:17:34.476112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.993 [2024-07-24 23:17:34.476140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.993 qpair failed and we were unable to recover it. 00:29:16.993 [2024-07-24 23:17:34.476557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.993 [2024-07-24 23:17:34.476584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.993 qpair failed and we were unable to recover it. 00:29:16.993 [2024-07-24 23:17:34.477005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.993 [2024-07-24 23:17:34.477034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.993 qpair failed and we were unable to recover it. 00:29:16.993 [2024-07-24 23:17:34.477474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.993 [2024-07-24 23:17:34.477501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.993 qpair failed and we were unable to recover it. 00:29:16.993 [2024-07-24 23:17:34.477867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.993 [2024-07-24 23:17:34.477896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.993 qpair failed and we were unable to recover it. 
00:29:16.993 [2024-07-24 23:17:34.478205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.993 [2024-07-24 23:17:34.478235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.993 qpair failed and we were unable to recover it. 00:29:16.993 [2024-07-24 23:17:34.478670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.993 [2024-07-24 23:17:34.478697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.993 qpair failed and we were unable to recover it. 00:29:16.993 [2024-07-24 23:17:34.479141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.993 [2024-07-24 23:17:34.479169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.993 qpair failed and we were unable to recover it. 00:29:16.993 [2024-07-24 23:17:34.479602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.993 [2024-07-24 23:17:34.479629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.993 qpair failed and we were unable to recover it. 00:29:16.993 [2024-07-24 23:17:34.479937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.993 [2024-07-24 23:17:34.479965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.993 qpair failed and we were unable to recover it. 
00:29:16.993 [2024-07-24 23:17:34.480401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.993 [2024-07-24 23:17:34.480429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.993 qpair failed and we were unable to recover it. 00:29:16.993 [2024-07-24 23:17:34.480866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.993 [2024-07-24 23:17:34.480894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.993 qpair failed and we were unable to recover it. 00:29:16.993 [2024-07-24 23:17:34.481328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.993 [2024-07-24 23:17:34.481355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.993 qpair failed and we were unable to recover it. 00:29:16.993 [2024-07-24 23:17:34.481843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.993 [2024-07-24 23:17:34.481871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.993 qpair failed and we were unable to recover it. 00:29:16.993 [2024-07-24 23:17:34.482287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.993 [2024-07-24 23:17:34.482314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.993 qpair failed and we were unable to recover it. 
00:29:16.993 [2024-07-24 23:17:34.482771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.993 [2024-07-24 23:17:34.482799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.993 qpair failed and we were unable to recover it. 00:29:16.993 [2024-07-24 23:17:34.483235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.993 [2024-07-24 23:17:34.483261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.993 qpair failed and we were unable to recover it. 00:29:16.993 [2024-07-24 23:17:34.483706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.993 [2024-07-24 23:17:34.483739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.993 qpair failed and we were unable to recover it. 00:29:16.993 [2024-07-24 23:17:34.484156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.993 [2024-07-24 23:17:34.484184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.993 qpair failed and we were unable to recover it. 00:29:16.993 [2024-07-24 23:17:34.484613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.993 [2024-07-24 23:17:34.484640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.993 qpair failed and we were unable to recover it. 
00:29:16.993 [2024-07-24 23:17:34.485091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.993 [2024-07-24 23:17:34.485119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.993 qpair failed and we were unable to recover it. 00:29:16.993 [2024-07-24 23:17:34.485563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.993 [2024-07-24 23:17:34.485590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.993 qpair failed and we were unable to recover it. 00:29:16.993 [2024-07-24 23:17:34.486031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.993 [2024-07-24 23:17:34.486059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.993 qpair failed and we were unable to recover it. 00:29:16.993 [2024-07-24 23:17:34.486470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.993 [2024-07-24 23:17:34.486498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.993 qpair failed and we were unable to recover it. 00:29:16.993 [2024-07-24 23:17:34.487025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.993 [2024-07-24 23:17:34.487055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.993 qpair failed and we were unable to recover it. 
00:29:16.993 [2024-07-24 23:17:34.487454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.993 [2024-07-24 23:17:34.487480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.993 qpair failed and we were unable to recover it. 00:29:16.993 [2024-07-24 23:17:34.487948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.993 [2024-07-24 23:17:34.487976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.993 qpair failed and we were unable to recover it. 00:29:16.993 [2024-07-24 23:17:34.488414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.993 [2024-07-24 23:17:34.488443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.993 qpair failed and we were unable to recover it. 00:29:16.993 [2024-07-24 23:17:34.488869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.993 [2024-07-24 23:17:34.488898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.994 qpair failed and we were unable to recover it. 00:29:16.994 [2024-07-24 23:17:34.489331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.994 [2024-07-24 23:17:34.489359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.994 qpair failed and we were unable to recover it. 
00:29:16.994 [2024-07-24 23:17:34.489801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.994 [2024-07-24 23:17:34.489830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.994 qpair failed and we were unable to recover it. 00:29:16.994 [2024-07-24 23:17:34.490165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.994 [2024-07-24 23:17:34.490192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.994 qpair failed and we were unable to recover it. 00:29:16.994 [2024-07-24 23:17:34.490557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.994 [2024-07-24 23:17:34.490588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.994 qpair failed and we were unable to recover it. 00:29:16.994 [2024-07-24 23:17:34.490905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.994 [2024-07-24 23:17:34.490935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.994 qpair failed and we were unable to recover it. 00:29:16.994 [2024-07-24 23:17:34.491366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.994 [2024-07-24 23:17:34.491394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.994 qpair failed and we were unable to recover it. 
00:29:16.994 [2024-07-24 23:17:34.491839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.994 [2024-07-24 23:17:34.491867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.994 qpair failed and we were unable to recover it. 00:29:16.994 [2024-07-24 23:17:34.492269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.994 [2024-07-24 23:17:34.492295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.994 qpair failed and we were unable to recover it. 00:29:16.994 [2024-07-24 23:17:34.492700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.994 [2024-07-24 23:17:34.492727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.994 qpair failed and we were unable to recover it. 00:29:16.994 [2024-07-24 23:17:34.493148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.994 [2024-07-24 23:17:34.493176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.994 qpair failed and we were unable to recover it. 00:29:16.994 [2024-07-24 23:17:34.493629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.994 [2024-07-24 23:17:34.493655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.994 qpair failed and we were unable to recover it. 
00:29:16.994 [2024-07-24 23:17:34.493977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.994 [2024-07-24 23:17:34.494005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.994 qpair failed and we were unable to recover it. 00:29:16.994 [2024-07-24 23:17:34.494183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.994 [2024-07-24 23:17:34.494214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.994 qpair failed and we were unable to recover it. 00:29:16.994 [2024-07-24 23:17:34.494648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.994 [2024-07-24 23:17:34.494675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.994 qpair failed and we were unable to recover it. 00:29:16.994 [2024-07-24 23:17:34.495085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.994 [2024-07-24 23:17:34.495113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.994 qpair failed and we were unable to recover it. 00:29:16.994 [2024-07-24 23:17:34.495531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.994 [2024-07-24 23:17:34.495559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.994 qpair failed and we were unable to recover it. 
00:29:16.994 [2024-07-24 23:17:34.495886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.994 [2024-07-24 23:17:34.495916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.994 qpair failed and we were unable to recover it. 00:29:16.994 [2024-07-24 23:17:34.496353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.994 [2024-07-24 23:17:34.496380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.994 qpair failed and we were unable to recover it. 00:29:16.994 [2024-07-24 23:17:34.496795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.994 [2024-07-24 23:17:34.496823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.994 qpair failed and we were unable to recover it. 00:29:16.994 [2024-07-24 23:17:34.497239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.994 [2024-07-24 23:17:34.497266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.994 qpair failed and we were unable to recover it. 00:29:16.994 [2024-07-24 23:17:34.497684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.994 [2024-07-24 23:17:34.497711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.994 qpair failed and we were unable to recover it. 
00:29:16.994 [2024-07-24 23:17:34.498231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.994 [2024-07-24 23:17:34.498261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.994 qpair failed and we were unable to recover it. 00:29:16.994 [2024-07-24 23:17:34.498687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.994 [2024-07-24 23:17:34.498713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.994 qpair failed and we were unable to recover it. 00:29:16.994 [2024-07-24 23:17:34.499157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.994 [2024-07-24 23:17:34.499186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.994 qpair failed and we were unable to recover it. 00:29:16.994 [2024-07-24 23:17:34.499516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.994 [2024-07-24 23:17:34.499542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.994 qpair failed and we were unable to recover it. 00:29:16.994 [2024-07-24 23:17:34.499949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.994 [2024-07-24 23:17:34.499977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.994 qpair failed and we were unable to recover it. 
00:29:16.994 [2024-07-24 23:17:34.500418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.994 [2024-07-24 23:17:34.500445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.994 qpair failed and we were unable to recover it. 00:29:16.994 [2024-07-24 23:17:34.500858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.994 [2024-07-24 23:17:34.500887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.994 qpair failed and we were unable to recover it. 00:29:16.994 [2024-07-24 23:17:34.501325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.994 [2024-07-24 23:17:34.501359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.994 qpair failed and we were unable to recover it. 00:29:16.994 [2024-07-24 23:17:34.501724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.994 [2024-07-24 23:17:34.501758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.994 qpair failed and we were unable to recover it. 00:29:16.994 [2024-07-24 23:17:34.502057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.994 [2024-07-24 23:17:34.502084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.994 qpair failed and we were unable to recover it. 
00:29:16.998 [2024-07-24 23:17:34.549640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.998 [2024-07-24 23:17:34.549666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.998 qpair failed and we were unable to recover it. 00:29:16.998 [2024-07-24 23:17:34.550161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.998 [2024-07-24 23:17:34.550189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.998 qpair failed and we were unable to recover it. 00:29:16.998 [2024-07-24 23:17:34.550604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.998 [2024-07-24 23:17:34.550631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.998 qpair failed and we were unable to recover it. 00:29:16.998 [2024-07-24 23:17:34.551052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.998 [2024-07-24 23:17:34.551080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.998 qpair failed and we were unable to recover it. 00:29:16.998 [2024-07-24 23:17:34.551512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.998 [2024-07-24 23:17:34.551539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.998 qpair failed and we were unable to recover it. 
00:29:16.998 [2024-07-24 23:17:34.551844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.998 [2024-07-24 23:17:34.551875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.998 qpair failed and we were unable to recover it. 00:29:16.998 [2024-07-24 23:17:34.552299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.998 [2024-07-24 23:17:34.552327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.998 qpair failed and we were unable to recover it. 00:29:16.998 [2024-07-24 23:17:34.552748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.998 [2024-07-24 23:17:34.552794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.998 qpair failed and we were unable to recover it. 00:29:16.998 [2024-07-24 23:17:34.553165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.998 [2024-07-24 23:17:34.553191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.998 qpair failed and we were unable to recover it. 00:29:16.998 [2024-07-24 23:17:34.553625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.998 [2024-07-24 23:17:34.553652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.998 qpair failed and we were unable to recover it. 
00:29:16.998 [2024-07-24 23:17:34.554107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.998 [2024-07-24 23:17:34.554135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.998 qpair failed and we were unable to recover it. 00:29:16.998 [2024-07-24 23:17:34.554494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.998 [2024-07-24 23:17:34.554522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.998 qpair failed and we were unable to recover it. 00:29:16.998 [2024-07-24 23:17:34.554819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.998 [2024-07-24 23:17:34.554868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.998 qpair failed and we were unable to recover it. 00:29:16.998 [2024-07-24 23:17:34.555326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.998 [2024-07-24 23:17:34.555353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.998 qpair failed and we were unable to recover it. 00:29:16.998 [2024-07-24 23:17:34.555777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.998 [2024-07-24 23:17:34.555805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.998 qpair failed and we were unable to recover it. 
00:29:16.998 [2024-07-24 23:17:34.556276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.998 [2024-07-24 23:17:34.556303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.998 qpair failed and we were unable to recover it. 00:29:16.998 [2024-07-24 23:17:34.556668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.998 [2024-07-24 23:17:34.556694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.998 qpair failed and we were unable to recover it. 00:29:16.998 [2024-07-24 23:17:34.557147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.998 [2024-07-24 23:17:34.557176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.998 qpair failed and we were unable to recover it. 00:29:16.998 [2024-07-24 23:17:34.557592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.998 [2024-07-24 23:17:34.557618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.998 qpair failed and we were unable to recover it. 00:29:16.998 [2024-07-24 23:17:34.557932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.998 [2024-07-24 23:17:34.557959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.998 qpair failed and we were unable to recover it. 
00:29:16.998 [2024-07-24 23:17:34.558391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.998 [2024-07-24 23:17:34.558417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.998 qpair failed and we were unable to recover it. 00:29:16.998 [2024-07-24 23:17:34.558860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.998 [2024-07-24 23:17:34.558889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.998 qpair failed and we were unable to recover it. 00:29:16.998 [2024-07-24 23:17:34.559324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.998 [2024-07-24 23:17:34.559351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.998 qpair failed and we were unable to recover it. 00:29:16.998 [2024-07-24 23:17:34.559771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.998 [2024-07-24 23:17:34.559800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.998 qpair failed and we were unable to recover it. 00:29:16.998 [2024-07-24 23:17:34.560233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.998 [2024-07-24 23:17:34.560260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.998 qpair failed and we were unable to recover it. 
00:29:16.998 [2024-07-24 23:17:34.560584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.998 [2024-07-24 23:17:34.560610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.998 qpair failed and we were unable to recover it. 00:29:16.998 [2024-07-24 23:17:34.560931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.998 [2024-07-24 23:17:34.560959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.998 qpair failed and we were unable to recover it. 00:29:16.998 [2024-07-24 23:17:34.561275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.998 [2024-07-24 23:17:34.561304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.998 qpair failed and we were unable to recover it. 00:29:16.998 [2024-07-24 23:17:34.561736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.998 [2024-07-24 23:17:34.561772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.998 qpair failed and we were unable to recover it. 00:29:16.998 [2024-07-24 23:17:34.562186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.998 [2024-07-24 23:17:34.562212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.998 qpair failed and we were unable to recover it. 
00:29:16.998 [2024-07-24 23:17:34.562638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.998 [2024-07-24 23:17:34.562664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.998 qpair failed and we were unable to recover it. 00:29:16.998 [2024-07-24 23:17:34.563132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.998 [2024-07-24 23:17:34.563161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.998 qpair failed and we were unable to recover it. 00:29:16.998 [2024-07-24 23:17:34.563596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.998 [2024-07-24 23:17:34.563623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.998 qpair failed and we were unable to recover it. 00:29:16.998 [2024-07-24 23:17:34.564046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.998 [2024-07-24 23:17:34.564074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.998 qpair failed and we were unable to recover it. 00:29:16.998 [2024-07-24 23:17:34.564493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.564520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 
00:29:16.999 [2024-07-24 23:17:34.564846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.564880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 00:29:16.999 [2024-07-24 23:17:34.565344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.565370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 00:29:16.999 [2024-07-24 23:17:34.565807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.565835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 00:29:16.999 [2024-07-24 23:17:34.566260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.566286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 00:29:16.999 [2024-07-24 23:17:34.566701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.566728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 
00:29:16.999 [2024-07-24 23:17:34.567151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.567179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 00:29:16.999 [2024-07-24 23:17:34.567649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.567676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 00:29:16.999 [2024-07-24 23:17:34.568114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.568143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 00:29:16.999 [2024-07-24 23:17:34.568562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.568589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 00:29:16.999 [2024-07-24 23:17:34.569012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.569041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 
00:29:16.999 [2024-07-24 23:17:34.569471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.569498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 00:29:16.999 [2024-07-24 23:17:34.569910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.569937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 00:29:16.999 [2024-07-24 23:17:34.570353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.570386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 00:29:16.999 [2024-07-24 23:17:34.570797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.570825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 00:29:16.999 [2024-07-24 23:17:34.571266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.571292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 
00:29:16.999 [2024-07-24 23:17:34.571605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.571634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 00:29:16.999 [2024-07-24 23:17:34.572070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.572098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 00:29:16.999 [2024-07-24 23:17:34.572536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.572562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 00:29:16.999 [2024-07-24 23:17:34.573001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.573028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 00:29:16.999 [2024-07-24 23:17:34.573463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.573490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 
00:29:16.999 [2024-07-24 23:17:34.573908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.573936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 00:29:16.999 [2024-07-24 23:17:34.574274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.574301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 00:29:16.999 [2024-07-24 23:17:34.574742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.574777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 00:29:16.999 [2024-07-24 23:17:34.575231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.575258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 00:29:16.999 [2024-07-24 23:17:34.575674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.575700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 
00:29:16.999 [2024-07-24 23:17:34.576142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.576170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 00:29:16.999 [2024-07-24 23:17:34.576604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.576631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 00:29:16.999 [2024-07-24 23:17:34.577088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.577117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 00:29:16.999 [2024-07-24 23:17:34.577540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.577567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 00:29:16.999 [2024-07-24 23:17:34.577999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.578027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 
00:29:16.999 [2024-07-24 23:17:34.578462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.578488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 00:29:16.999 [2024-07-24 23:17:34.578896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.578923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 00:29:16.999 [2024-07-24 23:17:34.579247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.579277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 00:29:16.999 [2024-07-24 23:17:34.579699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.579725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 00:29:16.999 [2024-07-24 23:17:34.580142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.580169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 
00:29:16.999 [2024-07-24 23:17:34.580377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.999 [2024-07-24 23:17:34.580404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:16.999 qpair failed and we were unable to recover it. 00:29:16.999 [2024-07-24 23:17:34.580804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.000 [2024-07-24 23:17:34.580833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.000 qpair failed and we were unable to recover it. 00:29:17.000 [2024-07-24 23:17:34.581268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.000 [2024-07-24 23:17:34.581295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.000 qpair failed and we were unable to recover it. 00:29:17.000 [2024-07-24 23:17:34.581581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.000 [2024-07-24 23:17:34.581610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.000 qpair failed and we were unable to recover it. 00:29:17.000 [2024-07-24 23:17:34.581913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.000 [2024-07-24 23:17:34.581941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.000 qpair failed and we were unable to recover it. 
00:29:17.000 [2024-07-24 23:17:34.582238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.000 [2024-07-24 23:17:34.582264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.000 qpair failed and we were unable to recover it.
[... the same three-line failure repeats continuously from 23:17:34.582580 through 23:17:34.631664: posix_sock_create reports connect() failed with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:29:17.003 [2024-07-24 23:17:34.632080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.003 [2024-07-24 23:17:34.632107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.003 qpair failed and we were unable to recover it. 00:29:17.003 [2024-07-24 23:17:34.632525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.003 [2024-07-24 23:17:34.632551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.003 qpair failed and we were unable to recover it. 00:29:17.003 [2024-07-24 23:17:34.632984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.003 [2024-07-24 23:17:34.633011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.003 qpair failed and we were unable to recover it. 00:29:17.003 [2024-07-24 23:17:34.633448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.003 [2024-07-24 23:17:34.633474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.003 qpair failed and we were unable to recover it. 00:29:17.003 [2024-07-24 23:17:34.633891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.003 [2024-07-24 23:17:34.633924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.003 qpair failed and we were unable to recover it. 
00:29:17.003 [2024-07-24 23:17:34.634399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.003 [2024-07-24 23:17:34.634425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.003 qpair failed and we were unable to recover it. 00:29:17.003 [2024-07-24 23:17:34.634860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.003 [2024-07-24 23:17:34.634887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.003 qpair failed and we were unable to recover it. 00:29:17.003 [2024-07-24 23:17:34.635324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.003 [2024-07-24 23:17:34.635350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.003 qpair failed and we were unable to recover it. 00:29:17.003 [2024-07-24 23:17:34.635776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.003 [2024-07-24 23:17:34.635804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.003 qpair failed and we were unable to recover it. 00:29:17.003 [2024-07-24 23:17:34.636257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.003 [2024-07-24 23:17:34.636284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.003 qpair failed and we were unable to recover it. 
00:29:17.003 [2024-07-24 23:17:34.636579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.003 [2024-07-24 23:17:34.636610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.004 qpair failed and we were unable to recover it. 00:29:17.004 [2024-07-24 23:17:34.637034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.004 [2024-07-24 23:17:34.637062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.004 qpair failed and we were unable to recover it. 00:29:17.004 [2024-07-24 23:17:34.637480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.004 [2024-07-24 23:17:34.637509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.004 qpair failed and we were unable to recover it. 00:29:17.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1046774 Killed "${NVMF_APP[@]}" "$@" 00:29:17.004 [2024-07-24 23:17:34.637958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.004 [2024-07-24 23:17:34.637987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.004 qpair failed and we were unable to recover it. 00:29:17.004 [2024-07-24 23:17:34.638459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.004 [2024-07-24 23:17:34.638486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.004 qpair failed and we were unable to recover it. 
00:29:17.004 [2024-07-24 23:17:34.638716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.004 [2024-07-24 23:17:34.638742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.004 qpair failed and we were unable to recover it. 00:29:17.004 23:17:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:29:17.004 [2024-07-24 23:17:34.639166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.004 [2024-07-24 23:17:34.639193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.004 qpair failed and we were unable to recover it. 00:29:17.004 23:17:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:17.004 [2024-07-24 23:17:34.639352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.004 [2024-07-24 23:17:34.639379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.004 qpair failed and we were unable to recover it. 00:29:17.004 23:17:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:17.004 23:17:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:17.004 [2024-07-24 23:17:34.639852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.004 [2024-07-24 23:17:34.639880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.004 qpair failed and we were unable to recover it. 
00:29:17.004 23:17:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:17.004 [2024-07-24 23:17:34.640162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.004 [2024-07-24 23:17:34.640188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.004 qpair failed and we were unable to recover it. 00:29:17.004 [2024-07-24 23:17:34.640495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.004 [2024-07-24 23:17:34.640524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.004 qpair failed and we were unable to recover it. 00:29:17.004 [2024-07-24 23:17:34.641014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.004 [2024-07-24 23:17:34.641042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.004 qpair failed and we were unable to recover it. 00:29:17.004 [2024-07-24 23:17:34.641460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.004 [2024-07-24 23:17:34.641489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.004 qpair failed and we were unable to recover it. 00:29:17.004 [2024-07-24 23:17:34.641908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.004 [2024-07-24 23:17:34.641937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.004 qpair failed and we were unable to recover it. 
00:29:17.004 [2024-07-24 23:17:34.642360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.004 [2024-07-24 23:17:34.642387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.004 qpair failed and we were unable to recover it. 00:29:17.004 [2024-07-24 23:17:34.642698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.004 [2024-07-24 23:17:34.642729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.004 qpair failed and we were unable to recover it. 00:29:17.004 [2024-07-24 23:17:34.643181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.004 [2024-07-24 23:17:34.643209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.004 qpair failed and we were unable to recover it. 00:29:17.004 [2024-07-24 23:17:34.643656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.004 [2024-07-24 23:17:34.643683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.004 qpair failed and we were unable to recover it. 00:29:17.004 [2024-07-24 23:17:34.644166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.004 [2024-07-24 23:17:34.644206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.004 qpair failed and we were unable to recover it. 
00:29:17.004 [2024-07-24 23:17:34.644638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.004 [2024-07-24 23:17:34.644667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.004 qpair failed and we were unable to recover it. 00:29:17.004 [2024-07-24 23:17:34.645074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.004 [2024-07-24 23:17:34.645104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.004 qpair failed and we were unable to recover it. 00:29:17.004 [2024-07-24 23:17:34.645619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.004 [2024-07-24 23:17:34.645648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.004 qpair failed and we were unable to recover it. 00:29:17.004 [2024-07-24 23:17:34.645946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.004 [2024-07-24 23:17:34.645978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.004 qpair failed and we were unable to recover it. 00:29:17.004 [2024-07-24 23:17:34.646436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.004 [2024-07-24 23:17:34.646463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.004 qpair failed and we were unable to recover it. 
00:29:17.004 [2024-07-24 23:17:34.646848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.004 [2024-07-24 23:17:34.646876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.004 qpair failed and we were unable to recover it. 00:29:17.004 [2024-07-24 23:17:34.647310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.004 [2024-07-24 23:17:34.647337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.004 qpair failed and we were unable to recover it. 00:29:17.004 [2024-07-24 23:17:34.647713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.004 [2024-07-24 23:17:34.647740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.004 qpair failed and we were unable to recover it. 00:29:17.004 23:17:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1047798 00:29:17.004 [2024-07-24 23:17:34.648200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.004 [2024-07-24 23:17:34.648231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.004 qpair failed and we were unable to recover it. 
00:29:17.004 23:17:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1047798 00:29:17.004 [2024-07-24 23:17:34.648655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.004 [2024-07-24 23:17:34.648684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.004 qpair failed and we were unable to recover it. 00:29:17.004 23:17:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:17.004 23:17:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1047798 ']' 00:29:17.004 [2024-07-24 23:17:34.649100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.004 23:17:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:17.004 [2024-07-24 23:17:34.649129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.004 qpair failed and we were unable to recover it. 00:29:17.004 23:17:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:17.004 [2024-07-24 23:17:34.649584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.004 23:17:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:17.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:17.005 [2024-07-24 23:17:34.649613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.005 qpair failed and we were unable to recover it. 00:29:17.005 23:17:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:17.005 [2024-07-24 23:17:34.649882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.005 [2024-07-24 23:17:34.649913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.005 qpair failed and we were unable to recover it. 00:29:17.005 23:17:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:17.005 [2024-07-24 23:17:34.650332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.005 [2024-07-24 23:17:34.650362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.005 qpair failed and we were unable to recover it. 00:29:17.005 [2024-07-24 23:17:34.650627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.005 [2024-07-24 23:17:34.650655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.005 qpair failed and we were unable to recover it. 00:29:17.005 [2024-07-24 23:17:34.651069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.005 [2024-07-24 23:17:34.651097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.005 qpair failed and we were unable to recover it. 
00:29:17.005 [2024-07-24 23:17:34.651512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.005 [2024-07-24 23:17:34.651539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.005 qpair failed and we were unable to recover it. 00:29:17.005 [2024-07-24 23:17:34.651978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.005 [2024-07-24 23:17:34.652008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.005 qpair failed and we were unable to recover it. 00:29:17.005 [2024-07-24 23:17:34.652433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.005 [2024-07-24 23:17:34.652464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.005 qpair failed and we were unable to recover it. 00:29:17.005 [2024-07-24 23:17:34.652761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.005 [2024-07-24 23:17:34.652795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.005 qpair failed and we were unable to recover it. 00:29:17.005 [2024-07-24 23:17:34.653095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.005 [2024-07-24 23:17:34.653125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.005 qpair failed and we were unable to recover it. 
00:29:17.005 [2024-07-24 23:17:34.653629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.005 [2024-07-24 23:17:34.653665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.005 qpair failed and we were unable to recover it. 00:29:17.005 [2024-07-24 23:17:34.654077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.005 [2024-07-24 23:17:34.654107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.005 qpair failed and we were unable to recover it. 00:29:17.005 [2024-07-24 23:17:34.654536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.005 [2024-07-24 23:17:34.654563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.005 qpair failed and we were unable to recover it. 00:29:17.005 [2024-07-24 23:17:34.654994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.005 [2024-07-24 23:17:34.655022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.005 qpair failed and we were unable to recover it. 00:29:17.005 [2024-07-24 23:17:34.655344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.005 [2024-07-24 23:17:34.655373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.005 qpair failed and we were unable to recover it. 
00:29:17.005 [2024-07-24 23:17:34.655799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.005 [2024-07-24 23:17:34.655829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.005 qpair failed and we were unable to recover it. 00:29:17.005 [2024-07-24 23:17:34.656298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.005 [2024-07-24 23:17:34.656327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.005 qpair failed and we were unable to recover it. 00:29:17.005 [2024-07-24 23:17:34.656775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.005 [2024-07-24 23:17:34.656804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.005 qpair failed and we were unable to recover it. 00:29:17.005 [2024-07-24 23:17:34.657127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.005 [2024-07-24 23:17:34.657159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.005 qpair failed and we were unable to recover it. 00:29:17.005 [2024-07-24 23:17:34.657585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.005 [2024-07-24 23:17:34.657613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.005 qpair failed and we were unable to recover it. 
00:29:17.005 [2024-07-24 23:17:34.658082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.005 [2024-07-24 23:17:34.658111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.005 qpair failed and we were unable to recover it. 00:29:17.005 [2024-07-24 23:17:34.658529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.005 [2024-07-24 23:17:34.658556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.005 qpair failed and we were unable to recover it. 00:29:17.005 [2024-07-24 23:17:34.658972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.005 [2024-07-24 23:17:34.658999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.005 qpair failed and we were unable to recover it. 00:29:17.005 [2024-07-24 23:17:34.659437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.005 [2024-07-24 23:17:34.659464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.005 qpair failed and we were unable to recover it. 00:29:17.005 [2024-07-24 23:17:34.659883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.005 [2024-07-24 23:17:34.659912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.005 qpair failed and we were unable to recover it. 
00:29:17.005 [2024-07-24 23:17:34.660344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.005 [2024-07-24 23:17:34.660371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.005 qpair failed and we were unable to recover it. 00:29:17.005 [2024-07-24 23:17:34.660829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.005 [2024-07-24 23:17:34.660858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.005 qpair failed and we were unable to recover it. 00:29:17.005 [2024-07-24 23:17:34.661317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.005 [2024-07-24 23:17:34.661344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.005 qpair failed and we were unable to recover it. 00:29:17.005 [2024-07-24 23:17:34.661776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.005 [2024-07-24 23:17:34.661804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.005 qpair failed and we were unable to recover it. 00:29:17.005 [2024-07-24 23:17:34.662253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.005 [2024-07-24 23:17:34.662280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.005 qpair failed and we were unable to recover it. 
00:29:17.005 [2024-07-24 23:17:34.662596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.005 [2024-07-24 23:17:34.662622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.005 qpair failed and we were unable to recover it. 00:29:17.005 [2024-07-24 23:17:34.663047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.005 [2024-07-24 23:17:34.663076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.005 qpair failed and we were unable to recover it. 00:29:17.005 [2024-07-24 23:17:34.663397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.005 [2024-07-24 23:17:34.663426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.005 qpair failed and we were unable to recover it. 00:29:17.005 [2024-07-24 23:17:34.663705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.005 [2024-07-24 23:17:34.663732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.005 qpair failed and we were unable to recover it. 00:29:17.005 [2024-07-24 23:17:34.664262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.005 [2024-07-24 23:17:34.664290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.006 qpair failed and we were unable to recover it. 
00:29:17.006 [2024-07-24 23:17:34.664698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.006 [2024-07-24 23:17:34.664724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.006 qpair failed and we were unable to recover it. 00:29:17.006 [2024-07-24 23:17:34.665179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.006 [2024-07-24 23:17:34.665207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.006 qpair failed and we were unable to recover it. 00:29:17.006 [2024-07-24 23:17:34.665621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.006 [2024-07-24 23:17:34.665648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.006 qpair failed and we were unable to recover it. 00:29:17.006 [2024-07-24 23:17:34.666059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.006 [2024-07-24 23:17:34.666088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.006 qpair failed and we were unable to recover it. 00:29:17.006 [2024-07-24 23:17:34.666518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.006 [2024-07-24 23:17:34.666546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.006 qpair failed and we were unable to recover it. 
00:29:17.006 [2024-07-24 23:17:34.666970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.006 [2024-07-24 23:17:34.666998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.006 qpair failed and we were unable to recover it.
00:29:17.006 [2024-07-24 23:17:34.667392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.006 [2024-07-24 23:17:34.667420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.006 qpair failed and we were unable to recover it.
00:29:17.006 [2024-07-24 23:17:34.667857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.006 [2024-07-24 23:17:34.667885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.006 qpair failed and we were unable to recover it.
00:29:17.006 [2024-07-24 23:17:34.668271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.006 [2024-07-24 23:17:34.668298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.006 qpair failed and we were unable to recover it.
00:29:17.006 [2024-07-24 23:17:34.668740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.006 [2024-07-24 23:17:34.668777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.006 qpair failed and we were unable to recover it.
00:29:17.006 [2024-07-24 23:17:34.669225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.006 [2024-07-24 23:17:34.669252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.006 qpair failed and we were unable to recover it.
00:29:17.006 [2024-07-24 23:17:34.669689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.006 [2024-07-24 23:17:34.669716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.006 qpair failed and we were unable to recover it.
00:29:17.006 [2024-07-24 23:17:34.670168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.006 [2024-07-24 23:17:34.670196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.006 qpair failed and we were unable to recover it.
00:29:17.006 [2024-07-24 23:17:34.670616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.006 [2024-07-24 23:17:34.670644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.006 qpair failed and we were unable to recover it.
00:29:17.006 [2024-07-24 23:17:34.671104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.006 [2024-07-24 23:17:34.671133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.006 qpair failed and we were unable to recover it.
00:29:17.006 [2024-07-24 23:17:34.671564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.006 [2024-07-24 23:17:34.671598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.006 qpair failed and we were unable to recover it.
00:29:17.006 [2024-07-24 23:17:34.671885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.006 [2024-07-24 23:17:34.671913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.006 qpair failed and we were unable to recover it.
00:29:17.006 [2024-07-24 23:17:34.672360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.006 [2024-07-24 23:17:34.672386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.006 qpair failed and we were unable to recover it.
00:29:17.006 [2024-07-24 23:17:34.672802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.006 [2024-07-24 23:17:34.672831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.006 qpair failed and we were unable to recover it.
00:29:17.006 [2024-07-24 23:17:34.673240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.006 [2024-07-24 23:17:34.673266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.006 qpair failed and we were unable to recover it.
00:29:17.006 [2024-07-24 23:17:34.673708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.006 [2024-07-24 23:17:34.673735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.006 qpair failed and we were unable to recover it.
00:29:17.006 [2024-07-24 23:17:34.674241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.006 [2024-07-24 23:17:34.674271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.006 qpair failed and we were unable to recover it.
00:29:17.006 [2024-07-24 23:17:34.674713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.006 [2024-07-24 23:17:34.674741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.006 qpair failed and we were unable to recover it.
00:29:17.006 [2024-07-24 23:17:34.675215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.006 [2024-07-24 23:17:34.675244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.006 qpair failed and we were unable to recover it.
00:29:17.006 [2024-07-24 23:17:34.675685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.006 [2024-07-24 23:17:34.675712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.006 qpair failed and we were unable to recover it.
00:29:17.006 [2024-07-24 23:17:34.676138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.006 [2024-07-24 23:17:34.676167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.006 qpair failed and we were unable to recover it.
00:29:17.006 [2024-07-24 23:17:34.676599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.006 [2024-07-24 23:17:34.676628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.006 qpair failed and we were unable to recover it.
00:29:17.006 [2024-07-24 23:17:34.677088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.006 [2024-07-24 23:17:34.677116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.006 qpair failed and we were unable to recover it.
00:29:17.006 [2024-07-24 23:17:34.677554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.006 [2024-07-24 23:17:34.677581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.006 qpair failed and we were unable to recover it.
00:29:17.006 [2024-07-24 23:17:34.677916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.006 [2024-07-24 23:17:34.677945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.006 qpair failed and we were unable to recover it.
00:29:17.006 [2024-07-24 23:17:34.678382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.006 [2024-07-24 23:17:34.678410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.006 qpair failed and we were unable to recover it.
00:29:17.006 [2024-07-24 23:17:34.678823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.006 [2024-07-24 23:17:34.678851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.006 qpair failed and we were unable to recover it.
00:29:17.006 [2024-07-24 23:17:34.679292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.006 [2024-07-24 23:17:34.679320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.006 qpair failed and we were unable to recover it.
00:29:17.006 [2024-07-24 23:17:34.679739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.006 [2024-07-24 23:17:34.679777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.006 qpair failed and we were unable to recover it.
00:29:17.006 [2024-07-24 23:17:34.680250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.006 [2024-07-24 23:17:34.680279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.007 qpair failed and we were unable to recover it.
00:29:17.007 [2024-07-24 23:17:34.680713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.007 [2024-07-24 23:17:34.680741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.007 qpair failed and we were unable to recover it.
00:29:17.007 [2024-07-24 23:17:34.681197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.007 [2024-07-24 23:17:34.681224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.007 qpair failed and we were unable to recover it.
00:29:17.007 [2024-07-24 23:17:34.681643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.007 [2024-07-24 23:17:34.681670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.007 qpair failed and we were unable to recover it.
00:29:17.007 [2024-07-24 23:17:34.681927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.007 [2024-07-24 23:17:34.681956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.007 qpair failed and we were unable to recover it.
00:29:17.007 [2024-07-24 23:17:34.682413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.007 [2024-07-24 23:17:34.682443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.007 qpair failed and we were unable to recover it.
00:29:17.007 [2024-07-24 23:17:34.682888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.007 [2024-07-24 23:17:34.682916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.007 qpair failed and we were unable to recover it.
00:29:17.007 [2024-07-24 23:17:34.683360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.007 [2024-07-24 23:17:34.683387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.007 qpair failed and we were unable to recover it.
00:29:17.007 [2024-07-24 23:17:34.683898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.007 [2024-07-24 23:17:34.683929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.007 qpair failed and we were unable to recover it.
00:29:17.007 [2024-07-24 23:17:34.684386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.007 [2024-07-24 23:17:34.684413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.007 qpair failed and we were unable to recover it.
00:29:17.007 [2024-07-24 23:17:34.684730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.007 [2024-07-24 23:17:34.684770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.007 qpair failed and we were unable to recover it.
00:29:17.007 [2024-07-24 23:17:34.685135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.007 [2024-07-24 23:17:34.685163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.007 qpair failed and we were unable to recover it.
00:29:17.007 [2024-07-24 23:17:34.685583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.007 [2024-07-24 23:17:34.685610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.007 qpair failed and we were unable to recover it.
00:29:17.007 [2024-07-24 23:17:34.686019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.007 [2024-07-24 23:17:34.686048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.007 qpair failed and we were unable to recover it.
00:29:17.007 [2024-07-24 23:17:34.686486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.007 [2024-07-24 23:17:34.686512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.007 qpair failed and we were unable to recover it.
00:29:17.007 [2024-07-24 23:17:34.686937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.007 [2024-07-24 23:17:34.686966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.007 qpair failed and we were unable to recover it.
00:29:17.007 [2024-07-24 23:17:34.687384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.007 [2024-07-24 23:17:34.687410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.007 qpair failed and we were unable to recover it.
00:29:17.007 [2024-07-24 23:17:34.687726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.007 [2024-07-24 23:17:34.687762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.007 qpair failed and we were unable to recover it.
00:29:17.007 [2024-07-24 23:17:34.688202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.007 [2024-07-24 23:17:34.688229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.007 qpair failed and we were unable to recover it.
00:29:17.007 [2024-07-24 23:17:34.688625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.007 [2024-07-24 23:17:34.688653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.007 qpair failed and we were unable to recover it.
00:29:17.007 [2024-07-24 23:17:34.689091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.007 [2024-07-24 23:17:34.689118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.007 qpair failed and we were unable to recover it.
00:29:17.007 [2024-07-24 23:17:34.689613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.007 [2024-07-24 23:17:34.689645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.007 qpair failed and we were unable to recover it.
00:29:17.007 [2024-07-24 23:17:34.689961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.007 [2024-07-24 23:17:34.689990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.007 qpair failed and we were unable to recover it.
00:29:17.007 [2024-07-24 23:17:34.690123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.007 [2024-07-24 23:17:34.690148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.007 qpair failed and we were unable to recover it.
00:29:17.007 [2024-07-24 23:17:34.690551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.007 [2024-07-24 23:17:34.690578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.007 qpair failed and we were unable to recover it.
00:29:17.007 [2024-07-24 23:17:34.690908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.007 [2024-07-24 23:17:34.690936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.007 qpair failed and we were unable to recover it.
00:29:17.007 [2024-07-24 23:17:34.691371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.007 [2024-07-24 23:17:34.691398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.007 qpair failed and we were unable to recover it.
00:29:17.007 [2024-07-24 23:17:34.691845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.007 [2024-07-24 23:17:34.691872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.007 qpair failed and we were unable to recover it.
00:29:17.007 [2024-07-24 23:17:34.692292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.007 [2024-07-24 23:17:34.692319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.007 qpair failed and we were unable to recover it.
00:29:17.007 [2024-07-24 23:17:34.692689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.007 [2024-07-24 23:17:34.692716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.007 qpair failed and we were unable to recover it.
00:29:17.007 [2024-07-24 23:17:34.693200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.007 [2024-07-24 23:17:34.693228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.007 qpair failed and we were unable to recover it.
00:29:17.007 [2024-07-24 23:17:34.693645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.007 [2024-07-24 23:17:34.693671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.007 qpair failed and we were unable to recover it.
00:29:17.007 [2024-07-24 23:17:34.694123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.007 [2024-07-24 23:17:34.694151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.694537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.694564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.694813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.694843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.695304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.695331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.695745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.695783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.696232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.696259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.696700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.696727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.697153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.697181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.697596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.697622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.698077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.698106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.698543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.698570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.698982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.699011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.699456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.699484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.699918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.699946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.700192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.700219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.700638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.700664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.700978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.701007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.701433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.701460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.701895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.701924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.702243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.702271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.702687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.702715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.703206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.703235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.703644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.703671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.704121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.704149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.704569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.704597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.704666] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization...
00:29:17.008 [2024-07-24 23:17:34.704720] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:17.008 [2024-07-24 23:17:34.705017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.705048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.705477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.705504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.705910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.705940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.706430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.706459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.706882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.706910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.707337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.707365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.707806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.707836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.708282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.708310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.708737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.708772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.709214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.709241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.709684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.709711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.008 qpair failed and we were unable to recover it.
00:29:17.008 [2024-07-24 23:17:34.710073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.008 [2024-07-24 23:17:34.710103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.710551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.710578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.711001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.711032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.711424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.711451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.711892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.711920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.712352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.712386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.712683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.712712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.713159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.713188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.713509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.713537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.713865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.713894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.714347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.714374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.714693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.714720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.715169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.715198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.715453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.715481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.715902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.715930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.716363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.716390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.716811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.716839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.717233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.717259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.717649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.717676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.718046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.718074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.718487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.718515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.718943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.718971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.719404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.719430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.719848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.719876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.720324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.720351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.720772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.720800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.721207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.721235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.721646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.721673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.721987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.722015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.722453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.722480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.722788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.722816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.723249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.723276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.723593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.723622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.723945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.723974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.724508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.724535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.009 qpair failed and we were unable to recover it.
00:29:17.009 [2024-07-24 23:17:34.724990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.009 [2024-07-24 23:17:34.725018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.725438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.725464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.725901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.725930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.726365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.726391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.726826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.726854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.727274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.727302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.727773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.727802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.728241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.728270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.728715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.728743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.729199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.729226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.729659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.729694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.730047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.730076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.730468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.730495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.730834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.730863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.731180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.731208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.731641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.731668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.732087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.732117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.732550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.732578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.733002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.733031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.733481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.733509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.733934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.733964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.734330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.734358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.734792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.734822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.735256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.735283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.735773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.735801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 EAL: No free 2048 kB hugepages reported on node 1
00:29:17.010 [2024-07-24 23:17:34.736197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.736226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.736666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.736693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.737131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.737159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.737565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.737591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.738003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.738030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.738450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.738477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.738899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.738928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.739448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.739476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.739801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.739829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.740205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.740232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.740666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.740692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.741116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.010 [2024-07-24 23:17:34.741144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.010 qpair failed and we were unable to recover it.
00:29:17.010 [2024-07-24 23:17:34.741453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.011 [2024-07-24 23:17:34.741487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.011 qpair failed and we were unable to recover it.
00:29:17.011 [2024-07-24 23:17:34.741929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.011 [2024-07-24 23:17:34.741958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.011 qpair failed and we were unable to recover it.
00:29:17.011 [2024-07-24 23:17:34.742440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.011 [2024-07-24 23:17:34.742466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.011 qpair failed and we were unable to recover it.
00:29:17.011 [2024-07-24 23:17:34.742864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.011 [2024-07-24 23:17:34.742893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.011 qpair failed and we were unable to recover it.
00:29:17.011 [2024-07-24 23:17:34.743288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.011 [2024-07-24 23:17:34.743317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.011 qpair failed and we were unable to recover it.
00:29:17.011 [2024-07-24 23:17:34.743527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.011 [2024-07-24 23:17:34.743556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.011 qpair failed and we were unable to recover it.
00:29:17.011 [2024-07-24 23:17:34.743961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.011 [2024-07-24 23:17:34.743991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.011 qpair failed and we were unable to recover it.
00:29:17.011 [2024-07-24 23:17:34.744433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.011 [2024-07-24 23:17:34.744461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.011 qpair failed and we were unable to recover it.
00:29:17.011 [2024-07-24 23:17:34.744907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.011 [2024-07-24 23:17:34.744934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.011 qpair failed and we were unable to recover it.
00:29:17.011 [2024-07-24 23:17:34.745377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.011 [2024-07-24 23:17:34.745404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.011 qpair failed and we were unable to recover it.
00:29:17.011 [2024-07-24 23:17:34.745794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.011 [2024-07-24 23:17:34.745821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.011 qpair failed and we were unable to recover it.
00:29:17.011 [2024-07-24 23:17:34.746222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.011 [2024-07-24 23:17:34.746249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.011 qpair failed and we were unable to recover it.
00:29:17.011 [2024-07-24 23:17:34.746664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.011 [2024-07-24 23:17:34.746690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.011 qpair failed and we were unable to recover it.
00:29:17.011 [2024-07-24 23:17:34.746996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.011 [2024-07-24 23:17:34.747032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.011 qpair failed and we were unable to recover it.
00:29:17.011 [2024-07-24 23:17:34.747347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.011 [2024-07-24 23:17:34.747375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.011 qpair failed and we were unable to recover it.
00:29:17.011 [2024-07-24 23:17:34.747704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.011 [2024-07-24 23:17:34.747733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.011 qpair failed and we were unable to recover it.
00:29:17.011 [2024-07-24 23:17:34.748189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.011 [2024-07-24 23:17:34.748219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.011 qpair failed and we were unable to recover it.
00:29:17.011 [2024-07-24 23:17:34.748631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.011 [2024-07-24 23:17:34.748659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.011 qpair failed and we were unable to recover it.
00:29:17.011 [2024-07-24 23:17:34.749073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.011 [2024-07-24 23:17:34.749102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.011 qpair failed and we were unable to recover it.
00:29:17.011 [2024-07-24 23:17:34.749413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.011 [2024-07-24 23:17:34.749445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.011 qpair failed and we were unable to recover it.
00:29:17.011 [2024-07-24 23:17:34.749898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.011 [2024-07-24 23:17:34.749928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.011 qpair failed and we were unable to recover it. 00:29:17.011 [2024-07-24 23:17:34.750391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.011 [2024-07-24 23:17:34.750419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.011 qpair failed and we were unable to recover it. 00:29:17.011 [2024-07-24 23:17:34.750778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.011 [2024-07-24 23:17:34.750810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.011 qpair failed and we were unable to recover it. 00:29:17.011 [2024-07-24 23:17:34.751220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.011 [2024-07-24 23:17:34.751247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.011 qpair failed and we were unable to recover it. 00:29:17.011 [2024-07-24 23:17:34.751697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.011 [2024-07-24 23:17:34.751724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.011 qpair failed and we were unable to recover it. 
00:29:17.011 [2024-07-24 23:17:34.752139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.011 [2024-07-24 23:17:34.752167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.011 qpair failed and we were unable to recover it. 00:29:17.011 [2024-07-24 23:17:34.752482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.011 [2024-07-24 23:17:34.752508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.011 qpair failed and we were unable to recover it. 00:29:17.011 [2024-07-24 23:17:34.752932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.011 [2024-07-24 23:17:34.752960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.011 qpair failed and we were unable to recover it. 00:29:17.011 [2024-07-24 23:17:34.753276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.011 [2024-07-24 23:17:34.753302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.011 qpair failed and we were unable to recover it. 00:29:17.011 [2024-07-24 23:17:34.753735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.011 [2024-07-24 23:17:34.753771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.011 qpair failed and we were unable to recover it. 
00:29:17.011 [2024-07-24 23:17:34.753931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.011 [2024-07-24 23:17:34.753960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.011 qpair failed and we were unable to recover it. 00:29:17.011 [2024-07-24 23:17:34.754373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.011 [2024-07-24 23:17:34.754401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.011 qpair failed and we were unable to recover it. 00:29:17.011 [2024-07-24 23:17:34.754730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.011 [2024-07-24 23:17:34.754782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.011 qpair failed and we were unable to recover it. 00:29:17.011 [2024-07-24 23:17:34.755224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.011 [2024-07-24 23:17:34.755251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.011 qpair failed and we were unable to recover it. 00:29:17.011 [2024-07-24 23:17:34.755692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.011 [2024-07-24 23:17:34.755720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.011 qpair failed and we were unable to recover it. 
00:29:17.011 [2024-07-24 23:17:34.756213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.011 [2024-07-24 23:17:34.756241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.011 qpair failed and we were unable to recover it. 00:29:17.011 [2024-07-24 23:17:34.756539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.012 [2024-07-24 23:17:34.756569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.012 qpair failed and we were unable to recover it. 00:29:17.012 [2024-07-24 23:17:34.756996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.012 [2024-07-24 23:17:34.757025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.012 qpair failed and we were unable to recover it. 00:29:17.012 [2024-07-24 23:17:34.757459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.012 [2024-07-24 23:17:34.757487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.012 qpair failed and we were unable to recover it. 00:29:17.012 [2024-07-24 23:17:34.757934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.012 [2024-07-24 23:17:34.757962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.012 qpair failed and we were unable to recover it. 
00:29:17.012 [2024-07-24 23:17:34.758276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.012 [2024-07-24 23:17:34.758306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.012 qpair failed and we were unable to recover it. 00:29:17.012 [2024-07-24 23:17:34.758770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.012 [2024-07-24 23:17:34.758799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.012 qpair failed and we were unable to recover it. 00:29:17.012 [2024-07-24 23:17:34.759234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.012 [2024-07-24 23:17:34.759261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.012 qpair failed and we were unable to recover it. 00:29:17.012 [2024-07-24 23:17:34.759662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.012 [2024-07-24 23:17:34.759689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.012 qpair failed and we were unable to recover it. 00:29:17.012 [2024-07-24 23:17:34.760096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.012 [2024-07-24 23:17:34.760125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.012 qpair failed and we were unable to recover it. 
00:29:17.012 [2024-07-24 23:17:34.760519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.012 [2024-07-24 23:17:34.760546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.012 qpair failed and we were unable to recover it. 00:29:17.012 [2024-07-24 23:17:34.761067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.012 [2024-07-24 23:17:34.761166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.012 qpair failed and we were unable to recover it. 00:29:17.012 [2024-07-24 23:17:34.761691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.012 [2024-07-24 23:17:34.761728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.012 qpair failed and we were unable to recover it. 00:29:17.012 [2024-07-24 23:17:34.762211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.012 [2024-07-24 23:17:34.762242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.012 qpair failed and we were unable to recover it. 00:29:17.012 [2024-07-24 23:17:34.762565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.012 [2024-07-24 23:17:34.762593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.012 qpair failed and we were unable to recover it. 
00:29:17.012 [2024-07-24 23:17:34.763003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.012 [2024-07-24 23:17:34.763033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.012 qpair failed and we were unable to recover it. 00:29:17.012 [2024-07-24 23:17:34.763349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.012 [2024-07-24 23:17:34.763378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.012 qpair failed and we were unable to recover it. 00:29:17.299 [2024-07-24 23:17:34.763873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.299 [2024-07-24 23:17:34.763905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.299 qpair failed and we were unable to recover it. 00:29:17.299 [2024-07-24 23:17:34.764268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.299 [2024-07-24 23:17:34.764307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.299 qpair failed and we were unable to recover it. 00:29:17.299 [2024-07-24 23:17:34.764576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.299 [2024-07-24 23:17:34.764604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.299 qpair failed and we were unable to recover it. 
00:29:17.299 [2024-07-24 23:17:34.765084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.299 [2024-07-24 23:17:34.765114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.299 qpair failed and we were unable to recover it. 00:29:17.299 [2024-07-24 23:17:34.765484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.299 [2024-07-24 23:17:34.765512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.299 qpair failed and we were unable to recover it. 00:29:17.299 [2024-07-24 23:17:34.765947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.299 [2024-07-24 23:17:34.765975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.299 qpair failed and we were unable to recover it. 00:29:17.299 [2024-07-24 23:17:34.766406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.299 [2024-07-24 23:17:34.766433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.299 qpair failed and we were unable to recover it. 00:29:17.299 [2024-07-24 23:17:34.766849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.299 [2024-07-24 23:17:34.766877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.299 qpair failed and we were unable to recover it. 
00:29:17.299 [2024-07-24 23:17:34.767322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.299 [2024-07-24 23:17:34.767349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.299 qpair failed and we were unable to recover it. 00:29:17.299 [2024-07-24 23:17:34.767784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.299 [2024-07-24 23:17:34.767814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.299 qpair failed and we were unable to recover it. 00:29:17.299 [2024-07-24 23:17:34.768258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.300 [2024-07-24 23:17:34.768285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.300 qpair failed and we were unable to recover it. 00:29:17.300 [2024-07-24 23:17:34.768702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.300 [2024-07-24 23:17:34.768729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.300 qpair failed and we were unable to recover it. 00:29:17.300 [2024-07-24 23:17:34.769252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.300 [2024-07-24 23:17:34.769283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.300 qpair failed and we were unable to recover it. 
00:29:17.300 [2024-07-24 23:17:34.769707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.300 [2024-07-24 23:17:34.769737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.300 qpair failed and we were unable to recover it. 00:29:17.300 [2024-07-24 23:17:34.770199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.300 [2024-07-24 23:17:34.770228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.300 qpair failed and we were unable to recover it. 00:29:17.300 [2024-07-24 23:17:34.770670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.300 [2024-07-24 23:17:34.770698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.300 qpair failed and we were unable to recover it. 00:29:17.300 [2024-07-24 23:17:34.771022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.300 [2024-07-24 23:17:34.771052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.300 qpair failed and we were unable to recover it. 00:29:17.300 [2024-07-24 23:17:34.771511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.300 [2024-07-24 23:17:34.771540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.300 qpair failed and we were unable to recover it. 
00:29:17.300 [2024-07-24 23:17:34.771983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.300 [2024-07-24 23:17:34.772012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.300 qpair failed and we were unable to recover it. 00:29:17.300 [2024-07-24 23:17:34.772429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.300 [2024-07-24 23:17:34.772457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.300 qpair failed and we were unable to recover it. 00:29:17.300 [2024-07-24 23:17:34.772902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.300 [2024-07-24 23:17:34.772932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.300 qpair failed and we were unable to recover it. 00:29:17.300 [2024-07-24 23:17:34.773396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.300 [2024-07-24 23:17:34.773424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.300 qpair failed and we were unable to recover it. 00:29:17.300 [2024-07-24 23:17:34.773960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.300 [2024-07-24 23:17:34.773988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.300 qpair failed and we were unable to recover it. 
00:29:17.300 [2024-07-24 23:17:34.774380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.300 [2024-07-24 23:17:34.774407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.300 qpair failed and we were unable to recover it. 00:29:17.300 [2024-07-24 23:17:34.774803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.300 [2024-07-24 23:17:34.774833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.300 qpair failed and we were unable to recover it. 00:29:17.300 [2024-07-24 23:17:34.775150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.300 [2024-07-24 23:17:34.775178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.300 qpair failed and we were unable to recover it. 00:29:17.300 [2024-07-24 23:17:34.775615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.300 [2024-07-24 23:17:34.775643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.300 qpair failed and we were unable to recover it. 00:29:17.300 [2024-07-24 23:17:34.775941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.300 [2024-07-24 23:17:34.775970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.300 qpair failed and we were unable to recover it. 
00:29:17.300 [2024-07-24 23:17:34.776423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.300 [2024-07-24 23:17:34.776451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.300 qpair failed and we were unable to recover it. 00:29:17.300 [2024-07-24 23:17:34.776869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.300 [2024-07-24 23:17:34.776897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.300 qpair failed and we were unable to recover it. 00:29:17.300 [2024-07-24 23:17:34.777232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.300 [2024-07-24 23:17:34.777259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.300 qpair failed and we were unable to recover it. 00:29:17.300 [2024-07-24 23:17:34.777655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.300 [2024-07-24 23:17:34.777682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.300 qpair failed and we were unable to recover it. 00:29:17.300 [2024-07-24 23:17:34.778134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.300 [2024-07-24 23:17:34.778163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.300 qpair failed and we were unable to recover it. 
00:29:17.300 [2024-07-24 23:17:34.778484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.300 [2024-07-24 23:17:34.778518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.300 qpair failed and we were unable to recover it. 00:29:17.300 [2024-07-24 23:17:34.779029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.300 [2024-07-24 23:17:34.779059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.300 qpair failed and we were unable to recover it. 00:29:17.300 [2024-07-24 23:17:34.779399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.300 [2024-07-24 23:17:34.779426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.300 qpair failed and we were unable to recover it. 00:29:17.300 [2024-07-24 23:17:34.779912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.300 [2024-07-24 23:17:34.779941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.300 qpair failed and we were unable to recover it. 00:29:17.300 [2024-07-24 23:17:34.780232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.300 [2024-07-24 23:17:34.780260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.300 qpair failed and we were unable to recover it. 
00:29:17.300 [2024-07-24 23:17:34.780703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.300 [2024-07-24 23:17:34.780730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.300 qpair failed and we were unable to recover it. 00:29:17.300 [2024-07-24 23:17:34.781199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.300 [2024-07-24 23:17:34.781227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.300 qpair failed and we were unable to recover it. 00:29:17.300 [2024-07-24 23:17:34.781461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.300 [2024-07-24 23:17:34.781488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.300 qpair failed and we were unable to recover it. 00:29:17.300 [2024-07-24 23:17:34.781793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.300 [2024-07-24 23:17:34.781832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.300 qpair failed and we were unable to recover it. 00:29:17.300 [2024-07-24 23:17:34.782227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.300 [2024-07-24 23:17:34.782254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.300 qpair failed and we were unable to recover it. 
00:29:17.301 [2024-07-24 23:17:34.782678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.301 [2024-07-24 23:17:34.782706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.301 qpair failed and we were unable to recover it. 00:29:17.301 [2024-07-24 23:17:34.783028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.301 [2024-07-24 23:17:34.783057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.301 qpair failed and we were unable to recover it. 00:29:17.301 [2024-07-24 23:17:34.783526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.301 [2024-07-24 23:17:34.783554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.301 qpair failed and we were unable to recover it. 00:29:17.301 [2024-07-24 23:17:34.783978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.301 [2024-07-24 23:17:34.784008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.301 qpair failed and we were unable to recover it. 00:29:17.301 [2024-07-24 23:17:34.784332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.301 [2024-07-24 23:17:34.784360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.301 qpair failed and we were unable to recover it. 
00:29:17.301 [2024-07-24 23:17:34.784683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.301 [2024-07-24 23:17:34.784710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.301 qpair failed and we were unable to recover it. 00:29:17.301 [2024-07-24 23:17:34.785180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.301 [2024-07-24 23:17:34.785211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.301 qpair failed and we were unable to recover it. 00:29:17.301 [2024-07-24 23:17:34.785632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.301 [2024-07-24 23:17:34.785660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.301 qpair failed and we were unable to recover it. 00:29:17.301 [2024-07-24 23:17:34.786003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.301 [2024-07-24 23:17:34.786031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.301 qpair failed and we were unable to recover it. 00:29:17.301 [2024-07-24 23:17:34.786353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.301 [2024-07-24 23:17:34.786381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.301 qpair failed and we were unable to recover it. 
00:29:17.301 [2024-07-24 23:17:34.786806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.301 [2024-07-24 23:17:34.786835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.301 qpair failed and we were unable to recover it.
[... the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" triplet above repeats continuously for tqpair=0x7f6bf4000b90 (addr=10.0.0.2, port=4420) from 23:17:34.787288 through 23:17:34.837551; the only distinct entry interleaved in the run is the NOTICE below ...]
00:29:17.301 [2024-07-24 23:17:34.793056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:17.304 [2024-07-24 23:17:34.837976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.304 [2024-07-24 23:17:34.838004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.304 qpair failed and we were unable to recover it. 00:29:17.304 [2024-07-24 23:17:34.838466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.304 [2024-07-24 23:17:34.838492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.304 qpair failed and we were unable to recover it. 00:29:17.304 [2024-07-24 23:17:34.838915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.304 [2024-07-24 23:17:34.838942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.304 qpair failed and we were unable to recover it. 00:29:17.304 [2024-07-24 23:17:34.839324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.304 [2024-07-24 23:17:34.839352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.304 qpair failed and we were unable to recover it. 00:29:17.304 [2024-07-24 23:17:34.839812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.304 [2024-07-24 23:17:34.839840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.304 qpair failed and we were unable to recover it. 
00:29:17.304 [2024-07-24 23:17:34.840151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.304 [2024-07-24 23:17:34.840178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.304 qpair failed and we were unable to recover it. 00:29:17.304 [2024-07-24 23:17:34.840620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.304 [2024-07-24 23:17:34.840654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.305 qpair failed and we were unable to recover it. 00:29:17.305 [2024-07-24 23:17:34.840993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.305 [2024-07-24 23:17:34.841021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.305 qpair failed and we were unable to recover it. 00:29:17.305 [2024-07-24 23:17:34.841475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.305 [2024-07-24 23:17:34.841501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.305 qpair failed and we were unable to recover it. 00:29:17.305 [2024-07-24 23:17:34.841823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.305 [2024-07-24 23:17:34.841851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.305 qpair failed and we were unable to recover it. 
00:29:17.305 [2024-07-24 23:17:34.842266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.305 [2024-07-24 23:17:34.842293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.305 qpair failed and we were unable to recover it. 00:29:17.305 [2024-07-24 23:17:34.842701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.305 [2024-07-24 23:17:34.842729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.305 qpair failed and we were unable to recover it. 00:29:17.305 [2024-07-24 23:17:34.843225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.305 [2024-07-24 23:17:34.843255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.305 qpair failed and we were unable to recover it. 00:29:17.305 [2024-07-24 23:17:34.843571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.305 [2024-07-24 23:17:34.843603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.305 qpair failed and we were unable to recover it. 00:29:17.305 [2024-07-24 23:17:34.844015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.305 [2024-07-24 23:17:34.844044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.305 qpair failed and we were unable to recover it. 
00:29:17.305 [2024-07-24 23:17:34.844470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.305 [2024-07-24 23:17:34.844498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.305 qpair failed and we were unable to recover it. 00:29:17.305 [2024-07-24 23:17:34.844954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.305 [2024-07-24 23:17:34.844984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.305 qpair failed and we were unable to recover it. 00:29:17.305 [2024-07-24 23:17:34.845397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.305 [2024-07-24 23:17:34.845424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.305 qpair failed and we were unable to recover it. 00:29:17.305 [2024-07-24 23:17:34.845842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.305 [2024-07-24 23:17:34.845870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.305 qpair failed and we were unable to recover it. 00:29:17.305 [2024-07-24 23:17:34.846295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.305 [2024-07-24 23:17:34.846322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.305 qpair failed and we were unable to recover it. 
00:29:17.305 [2024-07-24 23:17:34.846690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.305 [2024-07-24 23:17:34.846718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.305 qpair failed and we were unable to recover it. 00:29:17.305 [2024-07-24 23:17:34.847167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.305 [2024-07-24 23:17:34.847197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.305 qpair failed and we were unable to recover it. 00:29:17.305 [2024-07-24 23:17:34.847581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.305 [2024-07-24 23:17:34.847610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.305 qpair failed and we were unable to recover it. 00:29:17.305 [2024-07-24 23:17:34.848023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.305 [2024-07-24 23:17:34.848052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.305 qpair failed and we were unable to recover it. 00:29:17.305 [2024-07-24 23:17:34.848466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.305 [2024-07-24 23:17:34.848493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.305 qpair failed and we were unable to recover it. 
00:29:17.305 [2024-07-24 23:17:34.848911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.305 [2024-07-24 23:17:34.848940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.305 qpair failed and we were unable to recover it. 00:29:17.305 [2024-07-24 23:17:34.849375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.305 [2024-07-24 23:17:34.849403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.305 qpair failed and we were unable to recover it. 00:29:17.305 [2024-07-24 23:17:34.849733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.305 [2024-07-24 23:17:34.849772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.305 qpair failed and we were unable to recover it. 00:29:17.305 [2024-07-24 23:17:34.850163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.305 [2024-07-24 23:17:34.850191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.305 qpair failed and we were unable to recover it. 00:29:17.305 [2024-07-24 23:17:34.850596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.305 [2024-07-24 23:17:34.850622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.305 qpair failed and we were unable to recover it. 
00:29:17.305 [2024-07-24 23:17:34.851044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.305 [2024-07-24 23:17:34.851072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.305 qpair failed and we were unable to recover it. 00:29:17.305 [2024-07-24 23:17:34.851489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.305 [2024-07-24 23:17:34.851516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.305 qpair failed and we were unable to recover it. 00:29:17.305 [2024-07-24 23:17:34.851934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.305 [2024-07-24 23:17:34.851962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.305 qpair failed and we were unable to recover it. 00:29:17.305 [2024-07-24 23:17:34.852336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.305 [2024-07-24 23:17:34.852369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.305 qpair failed and we were unable to recover it. 00:29:17.305 [2024-07-24 23:17:34.852779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.305 [2024-07-24 23:17:34.852807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.305 qpair failed and we were unable to recover it. 
00:29:17.305 [2024-07-24 23:17:34.853265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.305 [2024-07-24 23:17:34.853292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.305 qpair failed and we were unable to recover it. 00:29:17.305 [2024-07-24 23:17:34.853709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.305 [2024-07-24 23:17:34.853736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.305 qpair failed and we were unable to recover it. 00:29:17.305 [2024-07-24 23:17:34.854210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.305 [2024-07-24 23:17:34.854239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.305 qpair failed and we were unable to recover it. 00:29:17.305 [2024-07-24 23:17:34.854558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.305 [2024-07-24 23:17:34.854585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.305 qpair failed and we were unable to recover it. 00:29:17.305 [2024-07-24 23:17:34.855034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.305 [2024-07-24 23:17:34.855063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.305 qpair failed and we were unable to recover it. 
00:29:17.305 [2024-07-24 23:17:34.855502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.305 [2024-07-24 23:17:34.855528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 00:29:17.306 [2024-07-24 23:17:34.855947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.306 [2024-07-24 23:17:34.855977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 00:29:17.306 [2024-07-24 23:17:34.856420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.306 [2024-07-24 23:17:34.856447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 00:29:17.306 [2024-07-24 23:17:34.856863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.306 [2024-07-24 23:17:34.856892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 00:29:17.306 [2024-07-24 23:17:34.857246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.306 [2024-07-24 23:17:34.857272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 
00:29:17.306 [2024-07-24 23:17:34.857675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.306 [2024-07-24 23:17:34.857702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 00:29:17.306 [2024-07-24 23:17:34.858152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.306 [2024-07-24 23:17:34.858181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 00:29:17.306 [2024-07-24 23:17:34.858483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.306 [2024-07-24 23:17:34.858510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 00:29:17.306 [2024-07-24 23:17:34.858824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.306 [2024-07-24 23:17:34.858855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 00:29:17.306 [2024-07-24 23:17:34.859298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.306 [2024-07-24 23:17:34.859325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 
00:29:17.306 [2024-07-24 23:17:34.859683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.306 [2024-07-24 23:17:34.859710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 00:29:17.306 [2024-07-24 23:17:34.859975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.306 [2024-07-24 23:17:34.860009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 00:29:17.306 [2024-07-24 23:17:34.860448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.306 [2024-07-24 23:17:34.860475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 00:29:17.306 [2024-07-24 23:17:34.860904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.306 [2024-07-24 23:17:34.860932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 00:29:17.306 [2024-07-24 23:17:34.861369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.306 [2024-07-24 23:17:34.861395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 
00:29:17.306 [2024-07-24 23:17:34.861813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.306 [2024-07-24 23:17:34.861841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 00:29:17.306 [2024-07-24 23:17:34.862247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.306 [2024-07-24 23:17:34.862273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 00:29:17.306 [2024-07-24 23:17:34.862710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.306 [2024-07-24 23:17:34.862738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 00:29:17.306 [2024-07-24 23:17:34.863238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.306 [2024-07-24 23:17:34.863265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 00:29:17.306 [2024-07-24 23:17:34.863682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.306 [2024-07-24 23:17:34.863710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 
00:29:17.306 [2024-07-24 23:17:34.863916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.306 [2024-07-24 23:17:34.863946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 00:29:17.306 [2024-07-24 23:17:34.864372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.306 [2024-07-24 23:17:34.864400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 00:29:17.306 [2024-07-24 23:17:34.864833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.306 [2024-07-24 23:17:34.864862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 00:29:17.306 [2024-07-24 23:17:34.865157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.306 [2024-07-24 23:17:34.865187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 00:29:17.306 [2024-07-24 23:17:34.865609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.306 [2024-07-24 23:17:34.865636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 
00:29:17.306 [2024-07-24 23:17:34.865957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.306 [2024-07-24 23:17:34.865989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 00:29:17.306 [2024-07-24 23:17:34.866418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.306 [2024-07-24 23:17:34.866445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 00:29:17.306 [2024-07-24 23:17:34.866863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.306 [2024-07-24 23:17:34.866890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 00:29:17.306 [2024-07-24 23:17:34.867331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.306 [2024-07-24 23:17:34.867359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 00:29:17.306 [2024-07-24 23:17:34.867651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.306 [2024-07-24 23:17:34.867681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 
00:29:17.306 [2024-07-24 23:17:34.868028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.306 [2024-07-24 23:17:34.868056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 00:29:17.306 [2024-07-24 23:17:34.868409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.306 [2024-07-24 23:17:34.868436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 00:29:17.306 [2024-07-24 23:17:34.868825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.306 [2024-07-24 23:17:34.868853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 00:29:17.306 [2024-07-24 23:17:34.869267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.306 [2024-07-24 23:17:34.869301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 00:29:17.306 [2024-07-24 23:17:34.869714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.306 [2024-07-24 23:17:34.869741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.306 qpair failed and we were unable to recover it. 
00:29:17.306 [2024-07-24 23:17:34.870196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.306 [2024-07-24 23:17:34.870223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.307 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair-failure sequence repeats with advancing timestamps, 23:17:34.870634 through 23:17:34.887802 ...]
00:29:17.308 [2024-07-24 23:17:34.888235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.308 [2024-07-24 23:17:34.888262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.308 qpair failed and we were unable to recover it.
00:29:17.308 [2024-07-24 23:17:34.888585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.308 [2024-07-24 23:17:34.888612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.308 qpair failed and we were unable to recover it.
00:29:17.308 [2024-07-24 23:17:34.888624] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:17.308 [2024-07-24 23:17:34.888671] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:17.308 [2024-07-24 23:17:34.888679] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:17.308 [2024-07-24 23:17:34.888686] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:17.308 [2024-07-24 23:17:34.888692] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:17.308 [2024-07-24 23:17:34.888856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:29:17.308 [2024-07-24 23:17:34.889052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.308 [2024-07-24 23:17:34.889081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.308 qpair failed and we were unable to recover it.
00:29:17.308 [2024-07-24 23:17:34.889193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:29:17.308 [2024-07-24 23:17:34.889412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.308 [2024-07-24 23:17:34.889438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.308 qpair failed and we were unable to recover it.
00:29:17.308 [2024-07-24 23:17:34.889333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:29:17.308 [2024-07-24 23:17:34.889335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:29:17.308 [2024-07-24 23:17:34.889795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.308 [2024-07-24 23:17:34.889823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.308 qpair failed and we were unable to recover it.
00:29:17.308 [2024-07-24 23:17:34.890340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.308 [2024-07-24 23:17:34.890366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.308 qpair failed and we were unable to recover it.
00:29:17.308 [2024-07-24 23:17:34.890770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.308 [2024-07-24 23:17:34.890798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.308 qpair failed and we were unable to recover it.
00:29:17.308 [2024-07-24 23:17:34.891255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.308 [2024-07-24 23:17:34.891282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.308 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair-failure sequence repeats with advancing timestamps, 23:17:34.891703 through 23:17:34.918444 ...]
00:29:17.310 [2024-07-24 23:17:34.918870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.310 [2024-07-24 23:17:34.918898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.310 qpair failed and we were unable to recover it. 00:29:17.310 [2024-07-24 23:17:34.919353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.310 [2024-07-24 23:17:34.919379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.310 qpair failed and we were unable to recover it. 00:29:17.310 [2024-07-24 23:17:34.919801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.310 [2024-07-24 23:17:34.919830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.310 qpair failed and we were unable to recover it. 00:29:17.310 [2024-07-24 23:17:34.920146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.310 [2024-07-24 23:17:34.920174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.310 qpair failed and we were unable to recover it. 00:29:17.310 [2024-07-24 23:17:34.920495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.310 [2024-07-24 23:17:34.920522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.310 qpair failed and we were unable to recover it. 
00:29:17.310 [2024-07-24 23:17:34.920966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.310 [2024-07-24 23:17:34.920995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.310 qpair failed and we were unable to recover it. 00:29:17.310 [2024-07-24 23:17:34.921326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.310 [2024-07-24 23:17:34.921352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.310 qpair failed and we were unable to recover it. 00:29:17.310 [2024-07-24 23:17:34.921717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.310 [2024-07-24 23:17:34.921744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.310 qpair failed and we were unable to recover it. 00:29:17.310 [2024-07-24 23:17:34.922228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.310 [2024-07-24 23:17:34.922256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.310 qpair failed and we were unable to recover it. 00:29:17.310 [2024-07-24 23:17:34.922418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.310 [2024-07-24 23:17:34.922444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.310 qpair failed and we were unable to recover it. 
00:29:17.310 [2024-07-24 23:17:34.922885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.310 [2024-07-24 23:17:34.922912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.310 qpair failed and we were unable to recover it. 00:29:17.310 [2024-07-24 23:17:34.923328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.310 [2024-07-24 23:17:34.923355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.310 qpair failed and we were unable to recover it. 00:29:17.310 [2024-07-24 23:17:34.923775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.310 [2024-07-24 23:17:34.923802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.310 qpair failed and we were unable to recover it. 00:29:17.310 [2024-07-24 23:17:34.924098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.310 [2024-07-24 23:17:34.924124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.310 qpair failed and we were unable to recover it. 00:29:17.310 [2024-07-24 23:17:34.924544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.310 [2024-07-24 23:17:34.924570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.310 qpair failed and we were unable to recover it. 
00:29:17.310 [2024-07-24 23:17:34.925006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.310 [2024-07-24 23:17:34.925034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.310 qpair failed and we were unable to recover it. 00:29:17.310 [2024-07-24 23:17:34.925458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.310 [2024-07-24 23:17:34.925491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.310 qpair failed and we were unable to recover it. 00:29:17.310 [2024-07-24 23:17:34.925912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.310 [2024-07-24 23:17:34.925941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.310 qpair failed and we were unable to recover it. 00:29:17.310 [2024-07-24 23:17:34.926391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.310 [2024-07-24 23:17:34.926417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.310 qpair failed and we were unable to recover it. 00:29:17.310 [2024-07-24 23:17:34.926681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.926709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 
00:29:17.311 [2024-07-24 23:17:34.927195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.927224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 00:29:17.311 [2024-07-24 23:17:34.927503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.927530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 00:29:17.311 [2024-07-24 23:17:34.927811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.927839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 00:29:17.311 [2024-07-24 23:17:34.928263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.928289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 00:29:17.311 [2024-07-24 23:17:34.928659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.928686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 
00:29:17.311 [2024-07-24 23:17:34.929006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.929035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 00:29:17.311 [2024-07-24 23:17:34.929453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.929480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 00:29:17.311 [2024-07-24 23:17:34.929919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.929947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 00:29:17.311 [2024-07-24 23:17:34.930283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.930310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 00:29:17.311 [2024-07-24 23:17:34.930732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.930767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 
00:29:17.311 [2024-07-24 23:17:34.931212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.931240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 00:29:17.311 [2024-07-24 23:17:34.931723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.931762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 00:29:17.311 [2024-07-24 23:17:34.932040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.932067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 00:29:17.311 [2024-07-24 23:17:34.932503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.932530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 00:29:17.311 [2024-07-24 23:17:34.932975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.933004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 
00:29:17.311 [2024-07-24 23:17:34.933445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.933473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 00:29:17.311 [2024-07-24 23:17:34.933803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.933831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 00:29:17.311 [2024-07-24 23:17:34.934326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.934353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 00:29:17.311 [2024-07-24 23:17:34.934770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.934797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 00:29:17.311 [2024-07-24 23:17:34.935213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.935240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 
00:29:17.311 [2024-07-24 23:17:34.935660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.935686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 00:29:17.311 [2024-07-24 23:17:34.936137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.936166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 00:29:17.311 [2024-07-24 23:17:34.936495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.936525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 00:29:17.311 [2024-07-24 23:17:34.936949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.936978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 00:29:17.311 [2024-07-24 23:17:34.937329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.937356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 
00:29:17.311 [2024-07-24 23:17:34.937675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.937701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 00:29:17.311 [2024-07-24 23:17:34.938118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.938146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 00:29:17.311 [2024-07-24 23:17:34.938585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.938612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 00:29:17.311 [2024-07-24 23:17:34.938931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.938959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 00:29:17.311 [2024-07-24 23:17:34.939416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.939443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 
00:29:17.311 [2024-07-24 23:17:34.939746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.939792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 00:29:17.311 [2024-07-24 23:17:34.940234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.940261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 00:29:17.311 [2024-07-24 23:17:34.940679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.940706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 00:29:17.311 [2024-07-24 23:17:34.941031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.941059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 00:29:17.311 [2024-07-24 23:17:34.941365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.311 [2024-07-24 23:17:34.941392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.311 qpair failed and we were unable to recover it. 
00:29:17.312 [2024-07-24 23:17:34.941812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.312 [2024-07-24 23:17:34.941840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.312 qpair failed and we were unable to recover it. 00:29:17.312 [2024-07-24 23:17:34.942231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.312 [2024-07-24 23:17:34.942265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.312 qpair failed and we were unable to recover it. 00:29:17.312 [2024-07-24 23:17:34.942530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.312 [2024-07-24 23:17:34.942557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.312 qpair failed and we were unable to recover it. 00:29:17.312 [2024-07-24 23:17:34.942865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.312 [2024-07-24 23:17:34.942893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.312 qpair failed and we were unable to recover it. 00:29:17.312 [2024-07-24 23:17:34.943312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.312 [2024-07-24 23:17:34.943339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.312 qpair failed and we were unable to recover it. 
00:29:17.312 [2024-07-24 23:17:34.943761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.312 [2024-07-24 23:17:34.943788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.312 qpair failed and we were unable to recover it. 00:29:17.312 [2024-07-24 23:17:34.944039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.312 [2024-07-24 23:17:34.944066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.312 qpair failed and we were unable to recover it. 00:29:17.312 [2024-07-24 23:17:34.944475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.312 [2024-07-24 23:17:34.944501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.312 qpair failed and we were unable to recover it. 00:29:17.312 [2024-07-24 23:17:34.945025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.312 [2024-07-24 23:17:34.945054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.312 qpair failed and we were unable to recover it. 00:29:17.312 [2024-07-24 23:17:34.945506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.312 [2024-07-24 23:17:34.945533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.312 qpair failed and we were unable to recover it. 
00:29:17.312 [2024-07-24 23:17:34.945983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.312 [2024-07-24 23:17:34.946012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.312 qpair failed and we were unable to recover it. 00:29:17.312 [2024-07-24 23:17:34.946311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.312 [2024-07-24 23:17:34.946337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.312 qpair failed and we were unable to recover it. 00:29:17.312 [2024-07-24 23:17:34.946587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.312 [2024-07-24 23:17:34.946613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.312 qpair failed and we were unable to recover it. 00:29:17.312 [2024-07-24 23:17:34.946874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.312 [2024-07-24 23:17:34.946901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.312 qpair failed and we were unable to recover it. 00:29:17.312 [2024-07-24 23:17:34.947232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.312 [2024-07-24 23:17:34.947265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.312 qpair failed and we were unable to recover it. 
00:29:17.312 [2024-07-24 23:17:34.947705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.312 [2024-07-24 23:17:34.947732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.312 qpair failed and we were unable to recover it. 00:29:17.312 [2024-07-24 23:17:34.948188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.312 [2024-07-24 23:17:34.948217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.312 qpair failed and we were unable to recover it. 00:29:17.312 [2024-07-24 23:17:34.948553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.312 [2024-07-24 23:17:34.948586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.312 qpair failed and we were unable to recover it. 00:29:17.312 [2024-07-24 23:17:34.949021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.312 [2024-07-24 23:17:34.949049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.312 qpair failed and we were unable to recover it. 00:29:17.312 [2024-07-24 23:17:34.949295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.312 [2024-07-24 23:17:34.949322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.312 qpair failed and we were unable to recover it. 
00:29:17.312 [2024-07-24 23:17:34.949568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.312 [2024-07-24 23:17:34.949596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.312 qpair failed and we were unable to recover it. 
[... the same three-line sequence (posix.c:1023:posix_sock_create connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error / qpair failed and we were unable to recover it) repeats approximately 114 more times for tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420, at timestamps 2024-07-24 23:17:34.950011 through 23:17:34.996308 ...]
00:29:17.316 [2024-07-24 23:17:34.996724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.316 [2024-07-24 23:17:34.996761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.316 qpair failed and we were unable to recover it. 00:29:17.316 [2024-07-24 23:17:34.996893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.316 [2024-07-24 23:17:34.996919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.316 qpair failed and we were unable to recover it. 00:29:17.316 [2024-07-24 23:17:34.997352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.316 [2024-07-24 23:17:34.997379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.316 qpair failed and we were unable to recover it. 00:29:17.316 [2024-07-24 23:17:34.997659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.316 [2024-07-24 23:17:34.997685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.316 qpair failed and we were unable to recover it. 00:29:17.316 [2024-07-24 23:17:34.998127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.316 [2024-07-24 23:17:34.998155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.316 qpair failed and we were unable to recover it. 
00:29:17.316 [2024-07-24 23:17:34.998513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.316 [2024-07-24 23:17:34.998540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.316 qpair failed and we were unable to recover it. 00:29:17.316 [2024-07-24 23:17:34.998858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.316 [2024-07-24 23:17:34.998885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.316 qpair failed and we were unable to recover it. 00:29:17.316 [2024-07-24 23:17:34.999169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.316 [2024-07-24 23:17:34.999196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.316 qpair failed and we were unable to recover it. 00:29:17.316 [2024-07-24 23:17:34.999448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.316 [2024-07-24 23:17:34.999474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.316 qpair failed and we were unable to recover it. 00:29:17.316 [2024-07-24 23:17:34.999905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.316 [2024-07-24 23:17:34.999934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.316 qpair failed and we were unable to recover it. 
00:29:17.316 [2024-07-24 23:17:35.000427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.316 [2024-07-24 23:17:35.000454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.316 qpair failed and we were unable to recover it. 00:29:17.316 [2024-07-24 23:17:35.000870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.316 [2024-07-24 23:17:35.000899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.316 qpair failed and we were unable to recover it. 00:29:17.316 [2024-07-24 23:17:35.001205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.316 [2024-07-24 23:17:35.001232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.316 qpair failed and we were unable to recover it. 00:29:17.316 [2024-07-24 23:17:35.001529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.316 [2024-07-24 23:17:35.001566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.316 qpair failed and we were unable to recover it. 00:29:17.316 [2024-07-24 23:17:35.001992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.316 [2024-07-24 23:17:35.002020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.316 qpair failed and we were unable to recover it. 
00:29:17.316 [2024-07-24 23:17:35.002279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.316 [2024-07-24 23:17:35.002306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.316 qpair failed and we were unable to recover it. 00:29:17.316 [2024-07-24 23:17:35.002422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.316 [2024-07-24 23:17:35.002447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.316 qpair failed and we were unable to recover it. 00:29:17.316 [2024-07-24 23:17:35.002893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.316 [2024-07-24 23:17:35.002921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.316 qpair failed and we were unable to recover it. 00:29:17.316 [2024-07-24 23:17:35.003340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.316 [2024-07-24 23:17:35.003367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.316 qpair failed and we were unable to recover it. 00:29:17.316 [2024-07-24 23:17:35.003786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.316 [2024-07-24 23:17:35.003814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.316 qpair failed and we were unable to recover it. 
00:29:17.316 [2024-07-24 23:17:35.004229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.316 [2024-07-24 23:17:35.004256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.316 qpair failed and we were unable to recover it. 00:29:17.316 [2024-07-24 23:17:35.004694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.316 [2024-07-24 23:17:35.004721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.316 qpair failed and we were unable to recover it. 00:29:17.316 [2024-07-24 23:17:35.005149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.316 [2024-07-24 23:17:35.005177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.316 qpair failed and we were unable to recover it. 00:29:17.316 [2024-07-24 23:17:35.005547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.316 [2024-07-24 23:17:35.005573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.316 qpair failed and we were unable to recover it. 00:29:17.316 [2024-07-24 23:17:35.005981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.316 [2024-07-24 23:17:35.006009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.316 qpair failed and we were unable to recover it. 
00:29:17.316 [2024-07-24 23:17:35.006138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.316 [2024-07-24 23:17:35.006164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.316 qpair failed and we were unable to recover it. 00:29:17.316 [2024-07-24 23:17:35.006440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.316 [2024-07-24 23:17:35.006467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.316 qpair failed and we were unable to recover it. 00:29:17.316 [2024-07-24 23:17:35.006898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.316 [2024-07-24 23:17:35.006926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.316 qpair failed and we were unable to recover it. 00:29:17.316 [2024-07-24 23:17:35.007341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.316 [2024-07-24 23:17:35.007368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.316 qpair failed and we were unable to recover it. 00:29:17.316 [2024-07-24 23:17:35.007874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.316 [2024-07-24 23:17:35.007902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.316 qpair failed and we were unable to recover it. 
00:29:17.316 [2024-07-24 23:17:35.008140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.316 [2024-07-24 23:17:35.008166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.316 qpair failed and we were unable to recover it. 00:29:17.316 [2024-07-24 23:17:35.008564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.316 [2024-07-24 23:17:35.008592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.316 qpair failed and we were unable to recover it. 00:29:17.316 [2024-07-24 23:17:35.008874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.316 [2024-07-24 23:17:35.008902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 00:29:17.317 [2024-07-24 23:17:35.009323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.009350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 00:29:17.317 [2024-07-24 23:17:35.009787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.009816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 
00:29:17.317 [2024-07-24 23:17:35.010042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.010069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 00:29:17.317 [2024-07-24 23:17:35.010485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.010512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 00:29:17.317 [2024-07-24 23:17:35.010936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.010964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 00:29:17.317 [2024-07-24 23:17:35.011394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.011421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 00:29:17.317 [2024-07-24 23:17:35.011836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.011865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 
00:29:17.317 [2024-07-24 23:17:35.012133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.012160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 00:29:17.317 [2024-07-24 23:17:35.012571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.012598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 00:29:17.317 [2024-07-24 23:17:35.012844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.012873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 00:29:17.317 [2024-07-24 23:17:35.013347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.013374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 00:29:17.317 [2024-07-24 23:17:35.013788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.013816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 
00:29:17.317 [2024-07-24 23:17:35.014320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.014347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 00:29:17.317 [2024-07-24 23:17:35.014793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.014820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 00:29:17.317 [2024-07-24 23:17:35.015243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.015269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 00:29:17.317 [2024-07-24 23:17:35.015688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.015714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 00:29:17.317 [2024-07-24 23:17:35.016162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.016190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 
00:29:17.317 [2024-07-24 23:17:35.016504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.016533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 00:29:17.317 [2024-07-24 23:17:35.016961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.016990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 00:29:17.317 [2024-07-24 23:17:35.017240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.017266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 00:29:17.317 [2024-07-24 23:17:35.017379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.017410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 00:29:17.317 [2024-07-24 23:17:35.017725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.017768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 
00:29:17.317 [2024-07-24 23:17:35.018004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.018031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 00:29:17.317 [2024-07-24 23:17:35.018448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.018474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 00:29:17.317 [2024-07-24 23:17:35.018891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.018919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 00:29:17.317 [2024-07-24 23:17:35.019191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.019218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 00:29:17.317 [2024-07-24 23:17:35.019646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.019674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 
00:29:17.317 [2024-07-24 23:17:35.020120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.020148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 00:29:17.317 [2024-07-24 23:17:35.020586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.020613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 00:29:17.317 [2024-07-24 23:17:35.021029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.021058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 00:29:17.317 [2024-07-24 23:17:35.021330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.021358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 00:29:17.317 [2024-07-24 23:17:35.021674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.021705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 
00:29:17.317 [2024-07-24 23:17:35.022120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.022148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 00:29:17.317 [2024-07-24 23:17:35.022582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.022608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 00:29:17.317 [2024-07-24 23:17:35.022734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.317 [2024-07-24 23:17:35.022770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.317 qpair failed and we were unable to recover it. 00:29:17.318 [2024-07-24 23:17:35.023250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.318 [2024-07-24 23:17:35.023277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.318 qpair failed and we were unable to recover it. 00:29:17.318 [2024-07-24 23:17:35.023496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.318 [2024-07-24 23:17:35.023523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.318 qpair failed and we were unable to recover it. 
00:29:17.318 [2024-07-24 23:17:35.023905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.318 [2024-07-24 23:17:35.023932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.318 qpair failed and we were unable to recover it. 00:29:17.318 [2024-07-24 23:17:35.024254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.318 [2024-07-24 23:17:35.024280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.318 qpair failed and we were unable to recover it. 00:29:17.318 [2024-07-24 23:17:35.024684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.318 [2024-07-24 23:17:35.024711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.318 qpair failed and we were unable to recover it. 00:29:17.318 [2024-07-24 23:17:35.025169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.318 [2024-07-24 23:17:35.025197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.318 qpair failed and we were unable to recover it. 00:29:17.318 [2024-07-24 23:17:35.025629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.318 [2024-07-24 23:17:35.025655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.318 qpair failed and we were unable to recover it. 
00:29:17.318 [2024-07-24 23:17:35.025958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.318 [2024-07-24 23:17:35.025990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.318 qpair failed and we were unable to recover it. 00:29:17.318 [2024-07-24 23:17:35.026417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.318 [2024-07-24 23:17:35.026444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.318 qpair failed and we were unable to recover it. 00:29:17.318 [2024-07-24 23:17:35.026862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.318 [2024-07-24 23:17:35.026890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.318 qpair failed and we were unable to recover it. 00:29:17.318 [2024-07-24 23:17:35.027336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.318 [2024-07-24 23:17:35.027362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.318 qpair failed and we were unable to recover it. 00:29:17.318 [2024-07-24 23:17:35.027776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.318 [2024-07-24 23:17:35.027804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.318 qpair failed and we were unable to recover it. 
00:29:17.321 [2024-07-24 23:17:35.074457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.321 [2024-07-24 23:17:35.074485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.321 qpair failed and we were unable to recover it. 00:29:17.321 [2024-07-24 23:17:35.074807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.321 [2024-07-24 23:17:35.074837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.321 qpair failed and we were unable to recover it. 00:29:17.593 [2024-07-24 23:17:35.075293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.593 [2024-07-24 23:17:35.075322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.593 qpair failed and we were unable to recover it. 00:29:17.593 [2024-07-24 23:17:35.075748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.593 [2024-07-24 23:17:35.075788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.593 qpair failed and we were unable to recover it. 00:29:17.593 [2024-07-24 23:17:35.076121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.593 [2024-07-24 23:17:35.076148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.593 qpair failed and we were unable to recover it. 
00:29:17.593 [2024-07-24 23:17:35.076569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.593 [2024-07-24 23:17:35.076597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.593 qpair failed and we were unable to recover it. 00:29:17.593 [2024-07-24 23:17:35.076823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.593 [2024-07-24 23:17:35.076852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.593 qpair failed and we were unable to recover it. 00:29:17.593 [2024-07-24 23:17:35.077134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.593 [2024-07-24 23:17:35.077160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.593 qpair failed and we were unable to recover it. 00:29:17.593 [2024-07-24 23:17:35.077552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.593 [2024-07-24 23:17:35.077580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.593 qpair failed and we were unable to recover it. 00:29:17.593 [2024-07-24 23:17:35.077953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.593 [2024-07-24 23:17:35.077982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.593 qpair failed and we were unable to recover it. 
00:29:17.593 [2024-07-24 23:17:35.078298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.593 [2024-07-24 23:17:35.078326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.593 qpair failed and we were unable to recover it. 00:29:17.593 [2024-07-24 23:17:35.078764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.593 [2024-07-24 23:17:35.078793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.593 qpair failed and we were unable to recover it. 00:29:17.593 [2024-07-24 23:17:35.079084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.593 [2024-07-24 23:17:35.079111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.593 qpair failed and we were unable to recover it. 00:29:17.593 [2024-07-24 23:17:35.079536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.593 [2024-07-24 23:17:35.079563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.593 qpair failed and we were unable to recover it. 00:29:17.593 [2024-07-24 23:17:35.079983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.593 [2024-07-24 23:17:35.080011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.593 qpair failed and we were unable to recover it. 
00:29:17.593 [2024-07-24 23:17:35.080443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.593 [2024-07-24 23:17:35.080469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.593 qpair failed and we were unable to recover it. 00:29:17.593 [2024-07-24 23:17:35.080901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.593 [2024-07-24 23:17:35.080929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.593 qpair failed and we were unable to recover it. 00:29:17.593 [2024-07-24 23:17:35.081170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.593 [2024-07-24 23:17:35.081196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.593 qpair failed and we were unable to recover it. 00:29:17.593 [2024-07-24 23:17:35.081498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.593 [2024-07-24 23:17:35.081525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.593 qpair failed and we were unable to recover it. 00:29:17.593 [2024-07-24 23:17:35.081878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.593 [2024-07-24 23:17:35.081906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.593 qpair failed and we were unable to recover it. 
00:29:17.593 [2024-07-24 23:17:35.082338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.593 [2024-07-24 23:17:35.082365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.593 qpair failed and we were unable to recover it. 00:29:17.593 [2024-07-24 23:17:35.082664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.593 [2024-07-24 23:17:35.082694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.593 qpair failed and we were unable to recover it. 00:29:17.593 [2024-07-24 23:17:35.082974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.593 [2024-07-24 23:17:35.083002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.593 qpair failed and we were unable to recover it. 00:29:17.593 [2024-07-24 23:17:35.083430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.593 [2024-07-24 23:17:35.083464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.593 qpair failed and we were unable to recover it. 00:29:17.594 [2024-07-24 23:17:35.083854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.083882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 
00:29:17.594 [2024-07-24 23:17:35.084178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.084205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 00:29:17.594 [2024-07-24 23:17:35.084483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.084510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 00:29:17.594 [2024-07-24 23:17:35.084935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.084963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 00:29:17.594 [2024-07-24 23:17:35.085402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.085430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 00:29:17.594 [2024-07-24 23:17:35.085856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.085885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 
00:29:17.594 [2024-07-24 23:17:35.086337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.086364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 00:29:17.594 [2024-07-24 23:17:35.086818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.086847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 00:29:17.594 [2024-07-24 23:17:35.087287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.087315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 00:29:17.594 [2024-07-24 23:17:35.087592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.087619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 00:29:17.594 [2024-07-24 23:17:35.087892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.087920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 
00:29:17.594 [2024-07-24 23:17:35.088236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.088269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 00:29:17.594 [2024-07-24 23:17:35.088708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.088735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 00:29:17.594 [2024-07-24 23:17:35.088989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.089017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 00:29:17.594 [2024-07-24 23:17:35.089322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.089351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 00:29:17.594 [2024-07-24 23:17:35.089669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.089698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 
00:29:17.594 [2024-07-24 23:17:35.089938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.089967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 00:29:17.594 [2024-07-24 23:17:35.090412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.090439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 00:29:17.594 [2024-07-24 23:17:35.090859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.090888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 00:29:17.594 [2024-07-24 23:17:35.091338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.091368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 00:29:17.594 [2024-07-24 23:17:35.091814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.091843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 
00:29:17.594 [2024-07-24 23:17:35.092164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.092195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 00:29:17.594 [2024-07-24 23:17:35.092610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.092637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 00:29:17.594 [2024-07-24 23:17:35.092944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.092973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 00:29:17.594 [2024-07-24 23:17:35.093226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.093254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 00:29:17.594 [2024-07-24 23:17:35.093485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.093512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 
00:29:17.594 [2024-07-24 23:17:35.093991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.094021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 00:29:17.594 [2024-07-24 23:17:35.094148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.094173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 00:29:17.594 [2024-07-24 23:17:35.094679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.094707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 00:29:17.594 [2024-07-24 23:17:35.095054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.095083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 00:29:17.594 [2024-07-24 23:17:35.095502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.095532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 
00:29:17.594 [2024-07-24 23:17:35.095804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.095833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 00:29:17.594 [2024-07-24 23:17:35.096040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.096066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 00:29:17.594 [2024-07-24 23:17:35.096360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.096388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 00:29:17.594 [2024-07-24 23:17:35.096811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.096839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 00:29:17.594 [2024-07-24 23:17:35.097205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.097233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 
00:29:17.594 [2024-07-24 23:17:35.097624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.594 [2024-07-24 23:17:35.097651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.594 qpair failed and we were unable to recover it. 00:29:17.594 [2024-07-24 23:17:35.098132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.595 [2024-07-24 23:17:35.098160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.595 qpair failed and we were unable to recover it. 00:29:17.595 [2024-07-24 23:17:35.098578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.595 [2024-07-24 23:17:35.098605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.595 qpair failed and we were unable to recover it. 00:29:17.595 [2024-07-24 23:17:35.098982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.595 [2024-07-24 23:17:35.099016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.595 qpair failed and we were unable to recover it. 00:29:17.595 [2024-07-24 23:17:35.099430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.595 [2024-07-24 23:17:35.099459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.595 qpair failed and we were unable to recover it. 
00:29:17.595 [2024-07-24 23:17:35.099727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.595 [2024-07-24 23:17:35.099766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.595 qpair failed and we were unable to recover it. 00:29:17.595 [2024-07-24 23:17:35.100201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.595 [2024-07-24 23:17:35.100229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.595 qpair failed and we were unable to recover it. 00:29:17.595 [2024-07-24 23:17:35.100657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.595 [2024-07-24 23:17:35.100684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.595 qpair failed and we were unable to recover it. 00:29:17.595 [2024-07-24 23:17:35.101080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.595 [2024-07-24 23:17:35.101109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.595 qpair failed and we were unable to recover it. 00:29:17.595 [2024-07-24 23:17:35.101411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.595 [2024-07-24 23:17:35.101438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.595 qpair failed and we were unable to recover it. 
00:29:17.595 [2024-07-24 23:17:35.101857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.595 [2024-07-24 23:17:35.101885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.595 qpair failed and we were unable to recover it. 00:29:17.595 [2024-07-24 23:17:35.102302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.595 [2024-07-24 23:17:35.102329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.595 qpair failed and we were unable to recover it. 00:29:17.595 [2024-07-24 23:17:35.102574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.595 [2024-07-24 23:17:35.102601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.595 qpair failed and we were unable to recover it. 00:29:17.595 [2024-07-24 23:17:35.103046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.595 [2024-07-24 23:17:35.103075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.595 qpair failed and we were unable to recover it. 00:29:17.595 [2024-07-24 23:17:35.103584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.595 [2024-07-24 23:17:35.103611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.595 qpair failed and we were unable to recover it. 
00:29:17.595 [2024-07-24 23:17:35.103792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.595 [2024-07-24 23:17:35.103824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.595 qpair failed and we were unable to recover it.
[... the same three-record sequence — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock connection error for tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats with timestamps from 23:17:35.104143 through 23:17:35.150578 ...]
00:29:17.598 [2024-07-24 23:17:35.150892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.598 [2024-07-24 23:17:35.150921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.598 qpair failed and we were unable to recover it.
00:29:17.598 [2024-07-24 23:17:35.151182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.598 [2024-07-24 23:17:35.151209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.598 qpair failed and we were unable to recover it. 00:29:17.598 [2024-07-24 23:17:35.151651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.598 [2024-07-24 23:17:35.151678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.598 qpair failed and we were unable to recover it. 00:29:17.598 [2024-07-24 23:17:35.152111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.598 [2024-07-24 23:17:35.152139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.598 qpair failed and we were unable to recover it. 00:29:17.598 [2024-07-24 23:17:35.152578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.598 [2024-07-24 23:17:35.152605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.598 qpair failed and we were unable to recover it. 00:29:17.598 [2024-07-24 23:17:35.152713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.598 [2024-07-24 23:17:35.152739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.598 qpair failed and we were unable to recover it. 
00:29:17.598 [2024-07-24 23:17:35.153029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.598 [2024-07-24 23:17:35.153057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.598 qpair failed and we were unable to recover it. 00:29:17.598 [2024-07-24 23:17:35.153378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.598 [2024-07-24 23:17:35.153406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.598 qpair failed and we were unable to recover it. 00:29:17.598 [2024-07-24 23:17:35.153803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.598 [2024-07-24 23:17:35.153833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.598 qpair failed and we were unable to recover it. 00:29:17.598 [2024-07-24 23:17:35.154246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.598 [2024-07-24 23:17:35.154273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.598 qpair failed and we were unable to recover it. 00:29:17.598 [2024-07-24 23:17:35.154640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.598 [2024-07-24 23:17:35.154668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.598 qpair failed and we were unable to recover it. 
00:29:17.598 [2024-07-24 23:17:35.154941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.598 [2024-07-24 23:17:35.154970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.598 qpair failed and we were unable to recover it. 00:29:17.598 [2024-07-24 23:17:35.155288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.598 [2024-07-24 23:17:35.155315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.598 qpair failed and we were unable to recover it. 00:29:17.599 [2024-07-24 23:17:35.155745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.155785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 00:29:17.599 [2024-07-24 23:17:35.156082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.156110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 00:29:17.599 [2024-07-24 23:17:35.156477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.156504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 
00:29:17.599 [2024-07-24 23:17:35.156821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.156850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 00:29:17.599 [2024-07-24 23:17:35.157297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.157324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 00:29:17.599 [2024-07-24 23:17:35.157689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.157715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 00:29:17.599 [2024-07-24 23:17:35.158104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.158133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 00:29:17.599 [2024-07-24 23:17:35.158588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.158618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 
00:29:17.599 [2024-07-24 23:17:35.159039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.159069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 00:29:17.599 [2024-07-24 23:17:35.159490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.159517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 00:29:17.599 [2024-07-24 23:17:35.159939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.159968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 00:29:17.599 [2024-07-24 23:17:35.160407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.160435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 00:29:17.599 [2024-07-24 23:17:35.160854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.160882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 
00:29:17.599 [2024-07-24 23:17:35.161208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.161240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 00:29:17.599 [2024-07-24 23:17:35.161662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.161689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 00:29:17.599 [2024-07-24 23:17:35.162125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.162154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 00:29:17.599 [2024-07-24 23:17:35.162438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.162467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 00:29:17.599 [2024-07-24 23:17:35.162970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.162999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 
00:29:17.599 [2024-07-24 23:17:35.163439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.163466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 00:29:17.599 [2024-07-24 23:17:35.163855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.163884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 00:29:17.599 [2024-07-24 23:17:35.164333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.164359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 00:29:17.599 [2024-07-24 23:17:35.164617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.164651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 00:29:17.599 [2024-07-24 23:17:35.165080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.165107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 
00:29:17.599 [2024-07-24 23:17:35.165348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.165378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 00:29:17.599 [2024-07-24 23:17:35.165622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.165649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 00:29:17.599 [2024-07-24 23:17:35.165770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.165796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 00:29:17.599 [2024-07-24 23:17:35.166100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.166131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 00:29:17.599 [2024-07-24 23:17:35.166635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.166664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 
00:29:17.599 [2024-07-24 23:17:35.167091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.167120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 00:29:17.599 [2024-07-24 23:17:35.167546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.167573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 00:29:17.599 [2024-07-24 23:17:35.167830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.167858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 00:29:17.599 [2024-07-24 23:17:35.168285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.168312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 00:29:17.599 [2024-07-24 23:17:35.168736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.168777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 
00:29:17.599 [2024-07-24 23:17:35.169143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.169170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 00:29:17.599 [2024-07-24 23:17:35.169607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.169634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 00:29:17.599 [2024-07-24 23:17:35.169874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.599 [2024-07-24 23:17:35.169902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.599 qpair failed and we were unable to recover it. 00:29:17.599 [2024-07-24 23:17:35.170347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.600 [2024-07-24 23:17:35.170373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.600 qpair failed and we were unable to recover it. 00:29:17.600 [2024-07-24 23:17:35.170500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.600 [2024-07-24 23:17:35.170525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.600 qpair failed and we were unable to recover it. 
00:29:17.600 [2024-07-24 23:17:35.170999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.600 [2024-07-24 23:17:35.171028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.600 qpair failed and we were unable to recover it. 00:29:17.600 [2024-07-24 23:17:35.171470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.600 [2024-07-24 23:17:35.171499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.600 qpair failed and we were unable to recover it. 00:29:17.600 [2024-07-24 23:17:35.171765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.600 [2024-07-24 23:17:35.171794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.600 qpair failed and we were unable to recover it. 00:29:17.600 [2024-07-24 23:17:35.172234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.600 [2024-07-24 23:17:35.172261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.600 qpair failed and we were unable to recover it. 00:29:17.600 [2024-07-24 23:17:35.172680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.600 [2024-07-24 23:17:35.172708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.600 qpair failed and we were unable to recover it. 
00:29:17.600 [2024-07-24 23:17:35.173040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.600 [2024-07-24 23:17:35.173070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.600 qpair failed and we were unable to recover it. 00:29:17.600 [2024-07-24 23:17:35.173557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.600 [2024-07-24 23:17:35.173585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.600 qpair failed and we were unable to recover it. 00:29:17.600 [2024-07-24 23:17:35.173830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.600 [2024-07-24 23:17:35.173859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.600 qpair failed and we were unable to recover it. 00:29:17.600 [2024-07-24 23:17:35.173970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.600 [2024-07-24 23:17:35.173997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.600 qpair failed and we were unable to recover it. 00:29:17.600 [2024-07-24 23:17:35.174448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.600 [2024-07-24 23:17:35.174475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.600 qpair failed and we were unable to recover it. 
00:29:17.600 [2024-07-24 23:17:35.174734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.600 [2024-07-24 23:17:35.174775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.600 qpair failed and we were unable to recover it. 00:29:17.600 [2024-07-24 23:17:35.175259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.600 [2024-07-24 23:17:35.175286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.600 qpair failed and we were unable to recover it. 00:29:17.600 [2024-07-24 23:17:35.175711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.600 [2024-07-24 23:17:35.175737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.600 qpair failed and we were unable to recover it. 00:29:17.600 [2024-07-24 23:17:35.176128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.600 [2024-07-24 23:17:35.176155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.600 qpair failed and we were unable to recover it. 00:29:17.600 [2024-07-24 23:17:35.176433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.600 [2024-07-24 23:17:35.176463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.600 qpair failed and we were unable to recover it. 
00:29:17.600 [2024-07-24 23:17:35.176883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.600 [2024-07-24 23:17:35.176913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.600 qpair failed and we were unable to recover it. 00:29:17.600 [2024-07-24 23:17:35.177367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.600 [2024-07-24 23:17:35.177394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.600 qpair failed and we were unable to recover it. 00:29:17.600 [2024-07-24 23:17:35.177825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.600 [2024-07-24 23:17:35.177853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.600 qpair failed and we were unable to recover it. 00:29:17.600 [2024-07-24 23:17:35.178304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.600 [2024-07-24 23:17:35.178331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.600 qpair failed and we were unable to recover it. 00:29:17.600 [2024-07-24 23:17:35.178782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.600 [2024-07-24 23:17:35.178810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.600 qpair failed and we were unable to recover it. 
00:29:17.600 [2024-07-24 23:17:35.178927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.600 [2024-07-24 23:17:35.178952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.600 qpair failed and we were unable to recover it. 00:29:17.600 [2024-07-24 23:17:35.179432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.600 [2024-07-24 23:17:35.179459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.600 qpair failed and we were unable to recover it. 00:29:17.600 [2024-07-24 23:17:35.179880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.600 [2024-07-24 23:17:35.179909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.600 qpair failed and we were unable to recover it. 00:29:17.600 [2024-07-24 23:17:35.180270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.600 [2024-07-24 23:17:35.180304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.600 qpair failed and we were unable to recover it. 00:29:17.600 [2024-07-24 23:17:35.180712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.600 [2024-07-24 23:17:35.180739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.600 qpair failed and we were unable to recover it. 
00:29:17.600 [2024-07-24 23:17:35.181203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.600 [2024-07-24 23:17:35.181231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.600 qpair failed and we were unable to recover it. 00:29:17.600 [2024-07-24 23:17:35.181643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.600 [2024-07-24 23:17:35.181670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.600 qpair failed and we were unable to recover it. 00:29:17.600 [2024-07-24 23:17:35.182106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.600 [2024-07-24 23:17:35.182134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.600 qpair failed and we were unable to recover it. 00:29:17.600 [2024-07-24 23:17:35.182588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.600 [2024-07-24 23:17:35.182615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.600 qpair failed and we were unable to recover it. 00:29:17.600 [2024-07-24 23:17:35.183046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.600 [2024-07-24 23:17:35.183074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.600 qpair failed and we were unable to recover it. 
00:29:17.604 [2024-07-24 23:17:35.229077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.604 [2024-07-24 23:17:35.229105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.604 qpair failed and we were unable to recover it. 00:29:17.604 [2024-07-24 23:17:35.229381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.604 [2024-07-24 23:17:35.229407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.604 qpair failed and we were unable to recover it. 00:29:17.604 [2024-07-24 23:17:35.229909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.604 [2024-07-24 23:17:35.229937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.604 qpair failed and we were unable to recover it. 00:29:17.604 [2024-07-24 23:17:35.230363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.604 [2024-07-24 23:17:35.230391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.604 qpair failed and we were unable to recover it. 00:29:17.604 [2024-07-24 23:17:35.230674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.604 [2024-07-24 23:17:35.230701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.604 qpair failed and we were unable to recover it. 
00:29:17.604 [2024-07-24 23:17:35.231158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.604 [2024-07-24 23:17:35.231188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.604 qpair failed and we were unable to recover it. 00:29:17.604 [2024-07-24 23:17:35.231606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.604 [2024-07-24 23:17:35.231634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.604 qpair failed and we were unable to recover it. 00:29:17.604 [2024-07-24 23:17:35.232094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.604 [2024-07-24 23:17:35.232123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.604 qpair failed and we were unable to recover it. 00:29:17.604 [2024-07-24 23:17:35.232400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.604 [2024-07-24 23:17:35.232427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.604 qpair failed and we were unable to recover it. 00:29:17.604 [2024-07-24 23:17:35.232665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.604 [2024-07-24 23:17:35.232692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.604 qpair failed and we were unable to recover it. 
00:29:17.604 [2024-07-24 23:17:35.232893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.604 [2024-07-24 23:17:35.232921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.604 qpair failed and we were unable to recover it. 00:29:17.604 [2024-07-24 23:17:35.233427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.604 [2024-07-24 23:17:35.233454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.604 qpair failed and we were unable to recover it. 00:29:17.604 [2024-07-24 23:17:35.233912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.604 [2024-07-24 23:17:35.233939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.604 qpair failed and we were unable to recover it. 00:29:17.604 [2024-07-24 23:17:35.234187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.604 [2024-07-24 23:17:35.234214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.604 qpair failed and we were unable to recover it. 00:29:17.604 [2024-07-24 23:17:35.234545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.604 [2024-07-24 23:17:35.234571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.604 qpair failed and we were unable to recover it. 
00:29:17.604 [2024-07-24 23:17:35.234851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.604 [2024-07-24 23:17:35.234878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.604 qpair failed and we were unable to recover it. 00:29:17.604 [2024-07-24 23:17:35.235302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.604 [2024-07-24 23:17:35.235329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.604 qpair failed and we were unable to recover it. 00:29:17.604 [2024-07-24 23:17:35.235777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.604 [2024-07-24 23:17:35.235807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.604 qpair failed and we were unable to recover it. 00:29:17.604 [2024-07-24 23:17:35.236260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.604 [2024-07-24 23:17:35.236287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.604 qpair failed and we were unable to recover it. 00:29:17.604 [2024-07-24 23:17:35.236712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.604 [2024-07-24 23:17:35.236739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.604 qpair failed and we were unable to recover it. 
00:29:17.604 [2024-07-24 23:17:35.237186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.604 [2024-07-24 23:17:35.237214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.604 qpair failed and we were unable to recover it. 00:29:17.604 [2024-07-24 23:17:35.237611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.604 [2024-07-24 23:17:35.237638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.604 qpair failed and we were unable to recover it. 00:29:17.604 [2024-07-24 23:17:35.238070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.604 [2024-07-24 23:17:35.238098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.604 qpair failed and we were unable to recover it. 00:29:17.604 [2024-07-24 23:17:35.238542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.604 [2024-07-24 23:17:35.238570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.604 qpair failed and we were unable to recover it. 00:29:17.604 [2024-07-24 23:17:35.238820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.604 [2024-07-24 23:17:35.238848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.604 qpair failed and we were unable to recover it. 
00:29:17.604 [2024-07-24 23:17:35.239278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.604 [2024-07-24 23:17:35.239305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.604 qpair failed and we were unable to recover it. 00:29:17.604 [2024-07-24 23:17:35.239721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.604 [2024-07-24 23:17:35.239747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.604 qpair failed and we were unable to recover it. 00:29:17.604 [2024-07-24 23:17:35.240166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.604 [2024-07-24 23:17:35.240192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.604 qpair failed and we were unable to recover it. 00:29:17.604 [2024-07-24 23:17:35.240471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.604 [2024-07-24 23:17:35.240499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.604 qpair failed and we were unable to recover it. 00:29:17.604 [2024-07-24 23:17:35.240933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.604 [2024-07-24 23:17:35.240961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.604 qpair failed and we were unable to recover it. 
00:29:17.604 [2024-07-24 23:17:35.241240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.604 [2024-07-24 23:17:35.241273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.604 qpair failed and we were unable to recover it. 00:29:17.605 [2024-07-24 23:17:35.241669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.241695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 00:29:17.605 [2024-07-24 23:17:35.242027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.242056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 00:29:17.605 [2024-07-24 23:17:35.242311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.242339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 00:29:17.605 [2024-07-24 23:17:35.242773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.242801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 
00:29:17.605 [2024-07-24 23:17:35.243207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.243234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 00:29:17.605 [2024-07-24 23:17:35.243659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.243686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 00:29:17.605 [2024-07-24 23:17:35.244123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.244151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 00:29:17.605 [2024-07-24 23:17:35.244652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.244680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 00:29:17.605 [2024-07-24 23:17:35.245087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.245115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 
00:29:17.605 [2024-07-24 23:17:35.245497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.245524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 00:29:17.605 [2024-07-24 23:17:35.245882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.245910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 00:29:17.605 [2024-07-24 23:17:35.246364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.246391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 00:29:17.605 [2024-07-24 23:17:35.246709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.246739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 00:29:17.605 [2024-07-24 23:17:35.247197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.247226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 
00:29:17.605 [2024-07-24 23:17:35.247484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.247511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 00:29:17.605 [2024-07-24 23:17:35.247775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.247805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 00:29:17.605 [2024-07-24 23:17:35.248122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.248149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 00:29:17.605 [2024-07-24 23:17:35.248577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.248604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 00:29:17.605 [2024-07-24 23:17:35.249052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.249080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 
00:29:17.605 [2024-07-24 23:17:35.249505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.249532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 00:29:17.605 [2024-07-24 23:17:35.249852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.249883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 00:29:17.605 [2024-07-24 23:17:35.250302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.250328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 00:29:17.605 [2024-07-24 23:17:35.250769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.250797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 00:29:17.605 [2024-07-24 23:17:35.251088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.251114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 
00:29:17.605 [2024-07-24 23:17:35.251532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.251560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 00:29:17.605 [2024-07-24 23:17:35.251983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.252012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 00:29:17.605 [2024-07-24 23:17:35.252264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.252292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 00:29:17.605 [2024-07-24 23:17:35.252720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.252747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 00:29:17.605 [2024-07-24 23:17:35.253060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.253087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 
00:29:17.605 [2024-07-24 23:17:35.253505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.253532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 00:29:17.605 [2024-07-24 23:17:35.253959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.253987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 00:29:17.605 [2024-07-24 23:17:35.254431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.254458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 00:29:17.605 [2024-07-24 23:17:35.254905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.254934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 00:29:17.605 [2024-07-24 23:17:35.255046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.255072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 
00:29:17.605 [2024-07-24 23:17:35.255557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.255583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 00:29:17.605 [2024-07-24 23:17:35.256013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.256042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 00:29:17.605 [2024-07-24 23:17:35.256319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.605 [2024-07-24 23:17:35.256348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.605 qpair failed and we were unable to recover it. 00:29:17.605 [2024-07-24 23:17:35.256797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.606 [2024-07-24 23:17:35.256825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.606 qpair failed and we were unable to recover it. 00:29:17.606 [2024-07-24 23:17:35.257272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.606 [2024-07-24 23:17:35.257300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.606 qpair failed and we were unable to recover it. 
00:29:17.606 [2024-07-24 23:17:35.257744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.606 [2024-07-24 23:17:35.257790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.606 qpair failed and we were unable to recover it. 00:29:17.606 [2024-07-24 23:17:35.258240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.606 [2024-07-24 23:17:35.258267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.606 qpair failed and we were unable to recover it. 00:29:17.606 [2024-07-24 23:17:35.258685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.606 [2024-07-24 23:17:35.258713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.606 qpair failed and we were unable to recover it. 00:29:17.606 [2024-07-24 23:17:35.259045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.606 [2024-07-24 23:17:35.259073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.606 qpair failed and we were unable to recover it. 00:29:17.606 [2024-07-24 23:17:35.259201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.606 [2024-07-24 23:17:35.259226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.606 qpair failed and we were unable to recover it. 
00:29:17.606 [2024-07-24 23:17:35.259498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.606 [2024-07-24 23:17:35.259524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420
00:29:17.606 qpair failed and we were unable to recover it.
[log collapsed: the three error records above (connect() failed with errno = 111 / sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeat a further ~114 times between 2024-07-24 23:17:35.259 and 23:17:35.306, differing only in timestamps]
00:29:17.609 [2024-07-24 23:17:35.307174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.609 [2024-07-24 23:17:35.307209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.609 qpair failed and we were unable to recover it. 00:29:17.609 [2024-07-24 23:17:35.307661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.609 [2024-07-24 23:17:35.307690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.609 qpair failed and we were unable to recover it. 00:29:17.609 [2024-07-24 23:17:35.308167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.609 [2024-07-24 23:17:35.308195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.609 qpair failed and we were unable to recover it. 00:29:17.609 [2024-07-24 23:17:35.308491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.609 [2024-07-24 23:17:35.308522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.609 qpair failed and we were unable to recover it. 00:29:17.609 [2024-07-24 23:17:35.308943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.609 [2024-07-24 23:17:35.308972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.609 qpair failed and we were unable to recover it. 
00:29:17.609 [2024-07-24 23:17:35.309392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.609 [2024-07-24 23:17:35.309420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.609 qpair failed and we were unable to recover it. 00:29:17.609 [2024-07-24 23:17:35.309862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.609 [2024-07-24 23:17:35.309892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.609 qpair failed and we were unable to recover it. 00:29:17.609 [2024-07-24 23:17:35.310163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.609 [2024-07-24 23:17:35.310191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.609 qpair failed and we were unable to recover it. 00:29:17.609 [2024-07-24 23:17:35.310611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.609 [2024-07-24 23:17:35.310637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.609 qpair failed and we were unable to recover it. 00:29:17.609 [2024-07-24 23:17:35.311099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.609 [2024-07-24 23:17:35.311128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.609 qpair failed and we were unable to recover it. 
00:29:17.609 [2024-07-24 23:17:35.311411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.609 [2024-07-24 23:17:35.311438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.609 qpair failed and we were unable to recover it. 00:29:17.609 [2024-07-24 23:17:35.311676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.609 [2024-07-24 23:17:35.311704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.609 qpair failed and we were unable to recover it. 00:29:17.609 [2024-07-24 23:17:35.311998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.609 [2024-07-24 23:17:35.312029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.609 qpair failed and we were unable to recover it. 00:29:17.609 [2024-07-24 23:17:35.312482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.609 [2024-07-24 23:17:35.312511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.609 qpair failed and we were unable to recover it. 00:29:17.609 [2024-07-24 23:17:35.312912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.609 [2024-07-24 23:17:35.312941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.609 qpair failed and we were unable to recover it. 
00:29:17.609 [2024-07-24 23:17:35.313357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.609 [2024-07-24 23:17:35.313384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.609 qpair failed and we were unable to recover it. 00:29:17.609 [2024-07-24 23:17:35.313809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.609 [2024-07-24 23:17:35.313838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.609 qpair failed and we were unable to recover it. 00:29:17.609 [2024-07-24 23:17:35.314257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.609 [2024-07-24 23:17:35.314284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.609 qpair failed and we were unable to recover it. 00:29:17.609 [2024-07-24 23:17:35.314722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.609 [2024-07-24 23:17:35.314750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.609 qpair failed and we were unable to recover it. 00:29:17.609 [2024-07-24 23:17:35.315001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.609 [2024-07-24 23:17:35.315029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.609 qpair failed and we were unable to recover it. 
00:29:17.609 [2024-07-24 23:17:35.315399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.315429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 00:29:17.610 [2024-07-24 23:17:35.315672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.315698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 00:29:17.610 [2024-07-24 23:17:35.316027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.316056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 00:29:17.610 [2024-07-24 23:17:35.316499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.316527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 00:29:17.610 [2024-07-24 23:17:35.316812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.316841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 
00:29:17.610 [2024-07-24 23:17:35.317272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.317300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 00:29:17.610 [2024-07-24 23:17:35.317713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.317740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 00:29:17.610 [2024-07-24 23:17:35.318228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.318256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 00:29:17.610 [2024-07-24 23:17:35.318525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.318551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 00:29:17.610 [2024-07-24 23:17:35.319010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.319038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 
00:29:17.610 [2024-07-24 23:17:35.319279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.319306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 00:29:17.610 [2024-07-24 23:17:35.319719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.319746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 00:29:17.610 [2024-07-24 23:17:35.320125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.320153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 00:29:17.610 [2024-07-24 23:17:35.320574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.320600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 00:29:17.610 [2024-07-24 23:17:35.321055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.321085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 
00:29:17.610 [2024-07-24 23:17:35.321512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.321540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 00:29:17.610 [2024-07-24 23:17:35.321784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.321813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 00:29:17.610 [2024-07-24 23:17:35.322096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.322122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 00:29:17.610 [2024-07-24 23:17:35.322580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.322606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 00:29:17.610 [2024-07-24 23:17:35.322890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.322918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 
00:29:17.610 [2024-07-24 23:17:35.323360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.323394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 00:29:17.610 [2024-07-24 23:17:35.323804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.323833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 00:29:17.610 [2024-07-24 23:17:35.324267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.324294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 00:29:17.610 [2024-07-24 23:17:35.324735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.324777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 00:29:17.610 [2024-07-24 23:17:35.325184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.325211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 
00:29:17.610 [2024-07-24 23:17:35.325481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.325508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 00:29:17.610 [2024-07-24 23:17:35.325822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.325850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 00:29:17.610 [2024-07-24 23:17:35.326175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.326205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 00:29:17.610 [2024-07-24 23:17:35.326501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.326532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 00:29:17.610 [2024-07-24 23:17:35.326951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.326979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 
00:29:17.610 [2024-07-24 23:17:35.327424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.327451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 00:29:17.610 [2024-07-24 23:17:35.327886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.327914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 00:29:17.610 [2024-07-24 23:17:35.328153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.328181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 00:29:17.610 [2024-07-24 23:17:35.328398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.328425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 00:29:17.610 [2024-07-24 23:17:35.328696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.328724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 
00:29:17.610 [2024-07-24 23:17:35.329200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.329227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 00:29:17.610 [2024-07-24 23:17:35.329649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.610 [2024-07-24 23:17:35.329676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.610 qpair failed and we were unable to recover it. 00:29:17.610 [2024-07-24 23:17:35.330130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.611 [2024-07-24 23:17:35.330160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.611 qpair failed and we were unable to recover it. 00:29:17.611 [2024-07-24 23:17:35.330432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.611 [2024-07-24 23:17:35.330459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.611 qpair failed and we were unable to recover it. 00:29:17.611 [2024-07-24 23:17:35.330910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.611 [2024-07-24 23:17:35.330939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.611 qpair failed and we were unable to recover it. 
00:29:17.611 [2024-07-24 23:17:35.331346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.611 [2024-07-24 23:17:35.331373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.611 qpair failed and we were unable to recover it. 00:29:17.611 [2024-07-24 23:17:35.331796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.611 [2024-07-24 23:17:35.331824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.611 qpair failed and we were unable to recover it. 00:29:17.611 [2024-07-24 23:17:35.332064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.611 [2024-07-24 23:17:35.332092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.611 qpair failed and we were unable to recover it. 00:29:17.611 [2024-07-24 23:17:35.332366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.611 [2024-07-24 23:17:35.332393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.611 qpair failed and we were unable to recover it. 00:29:17.611 [2024-07-24 23:17:35.332818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.611 [2024-07-24 23:17:35.332848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.611 qpair failed and we were unable to recover it. 
00:29:17.611 [2024-07-24 23:17:35.333313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.611 [2024-07-24 23:17:35.333340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.611 qpair failed and we were unable to recover it. 00:29:17.611 [2024-07-24 23:17:35.333792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.611 [2024-07-24 23:17:35.333821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.611 qpair failed and we were unable to recover it. 00:29:17.611 [2024-07-24 23:17:35.334060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.611 [2024-07-24 23:17:35.334089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.611 qpair failed and we were unable to recover it. 00:29:17.611 [2024-07-24 23:17:35.334377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.611 [2024-07-24 23:17:35.334403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.611 qpair failed and we were unable to recover it. 00:29:17.611 [2024-07-24 23:17:35.334855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.611 [2024-07-24 23:17:35.334884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.611 qpair failed and we were unable to recover it. 
00:29:17.611 [2024-07-24 23:17:35.335238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.611 [2024-07-24 23:17:35.335265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.611 qpair failed and we were unable to recover it. 00:29:17.611 [2024-07-24 23:17:35.335540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.611 [2024-07-24 23:17:35.335570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.611 qpair failed and we were unable to recover it. 00:29:17.611 [2024-07-24 23:17:35.336005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.611 [2024-07-24 23:17:35.336033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.611 qpair failed and we were unable to recover it. 00:29:17.611 [2024-07-24 23:17:35.336495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.611 [2024-07-24 23:17:35.336521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.611 qpair failed and we were unable to recover it. 00:29:17.611 [2024-07-24 23:17:35.336959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.611 [2024-07-24 23:17:35.336987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.611 qpair failed and we were unable to recover it. 
00:29:17.611 [2024-07-24 23:17:35.337353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.611 [2024-07-24 23:17:35.337380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.611 qpair failed and we were unable to recover it. 00:29:17.611 [2024-07-24 23:17:35.337634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.611 [2024-07-24 23:17:35.337660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.611 qpair failed and we were unable to recover it. 00:29:17.611 [2024-07-24 23:17:35.338070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.611 [2024-07-24 23:17:35.338099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.611 qpair failed and we were unable to recover it. 00:29:17.611 [2024-07-24 23:17:35.338532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.611 [2024-07-24 23:17:35.338559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.611 qpair failed and we were unable to recover it. 00:29:17.611 [2024-07-24 23:17:35.338847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.611 [2024-07-24 23:17:35.338875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.611 qpair failed and we were unable to recover it. 
00:29:17.613 [... the same three-line sequence — posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it — repeats continuously from [2024-07-24 23:17:35.339162] through [2024-07-24 23:17:35.383614] (log timestamps 00:29:17.611–00:29:17.884) ...]
00:29:17.885 [2024-07-24 23:17:35.383856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.383886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.384207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.384235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.384658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.384685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.385139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.385166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.385440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.385467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 
00:29:17.885 [2024-07-24 23:17:35.385909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.385936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.386354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.386382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.386610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.386636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.387158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.387185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.387605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.387631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 
00:29:17.885 [2024-07-24 23:17:35.387738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.387782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.388260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.388288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.388721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.388748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.389171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.389198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.389520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.389550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 
00:29:17.885 [2024-07-24 23:17:35.389993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.390021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.390461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.390487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.390918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.390947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.391337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.391365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.391731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.391777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 
00:29:17.885 [2024-07-24 23:17:35.392240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.392267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.392548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.392575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.392961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.392990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.393255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.393282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.393700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.393727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 
00:29:17.885 [2024-07-24 23:17:35.394147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.394176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.394592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.394618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.395045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.395075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.395316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.395343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.395774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.395802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 
00:29:17.885 [2024-07-24 23:17:35.396182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.396208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.396669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.396696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.397124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.397152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.397573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.397600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.398010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.398037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 
00:29:17.885 [2024-07-24 23:17:35.398439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.398465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.398884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.398912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.399337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.399375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.399798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.399826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.400118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.400145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 
00:29:17.885 [2024-07-24 23:17:35.400603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.400629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.401028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.401056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.401300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.401327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.401760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.401788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.402031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.402057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 
00:29:17.885 [2024-07-24 23:17:35.402352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.402377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.402608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.402635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.402950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.402981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.403384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.403411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.403630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.403656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 
00:29:17.885 [2024-07-24 23:17:35.404051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.404078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.404481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.404508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.404940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.404967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.405078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.405103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.405393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.405421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 
00:29:17.885 [2024-07-24 23:17:35.405823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.405850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.406087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.406114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.406549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.406574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.407052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.407080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.407495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.407521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 
00:29:17.885 [2024-07-24 23:17:35.407943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.407971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.408399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.408426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.408795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.408822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.409098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.409127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.409441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.409470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 
00:29:17.885 [2024-07-24 23:17:35.409841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.409869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.410155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.410182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.410584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.410610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.411012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.411041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.411460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.411487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 
00:29:17.885 [2024-07-24 23:17:35.411767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.885 [2024-07-24 23:17:35.411795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.885 qpair failed and we were unable to recover it. 00:29:17.885 [2024-07-24 23:17:35.412095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.886 [2024-07-24 23:17:35.412123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.886 qpair failed and we were unable to recover it. 00:29:17.886 [2024-07-24 23:17:35.412542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.886 [2024-07-24 23:17:35.412568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.886 qpair failed and we were unable to recover it. 00:29:17.886 [2024-07-24 23:17:35.412997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.886 [2024-07-24 23:17:35.413025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.886 qpair failed and we were unable to recover it. 00:29:17.886 [2024-07-24 23:17:35.413488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.886 [2024-07-24 23:17:35.413515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.886 qpair failed and we were unable to recover it. 
00:29:17.886 [2024-07-24 23:17:35.413921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.886 [2024-07-24 23:17:35.413949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 00:29:17.886 qpair failed and we were unable to recover it. 
[the preceding connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" sequence for tqpair=0x7f6bf4000b90 with addr=10.0.0.2, port=4420 repeats verbatim, with only the timestamps advancing, through 2024-07-24 23:17:35.446546]
00:29:17.886 [2024-07-24 23:17:35.446952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.887 [2024-07-24 23:17:35.447001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.887 qpair failed and we were unable to recover it. 
[the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" sequence for tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 repeats verbatim, with only the timestamps advancing, through 2024-07-24 23:17:35.457358]
00:29:17.887 [2024-07-24 23:17:35.457760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.887 [2024-07-24 23:17:35.457770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.887 qpair failed and we were unable to recover it. 00:29:17.887 [2024-07-24 23:17:35.457837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.887 [2024-07-24 23:17:35.457846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.887 qpair failed and we were unable to recover it. 00:29:17.887 Read completed with error (sct=0, sc=8) 00:29:17.887 starting I/O failed 00:29:17.887 Read completed with error (sct=0, sc=8) 00:29:17.887 starting I/O failed 00:29:17.887 Read completed with error (sct=0, sc=8) 00:29:17.887 starting I/O failed 00:29:17.887 Read completed with error (sct=0, sc=8) 00:29:17.887 starting I/O failed 00:29:17.887 Write completed with error (sct=0, sc=8) 00:29:17.887 starting I/O failed 00:29:17.887 Read completed with error (sct=0, sc=8) 00:29:17.887 starting I/O failed 00:29:17.887 Write completed with error (sct=0, sc=8) 00:29:17.887 starting I/O failed 00:29:17.887 Write completed with error (sct=0, sc=8) 00:29:17.887 starting I/O failed 00:29:17.887 Read completed with error (sct=0, sc=8) 00:29:17.887 starting I/O failed 00:29:17.887 Write completed with error (sct=0, sc=8) 00:29:17.887 starting I/O failed 00:29:17.887 Write completed with error (sct=0, sc=8) 00:29:17.887 starting I/O failed 00:29:17.887 Read completed with error (sct=0, sc=8) 00:29:17.887 starting I/O failed 00:29:17.887 Read completed with error (sct=0, sc=8) 00:29:17.887 starting I/O failed 00:29:17.887 Read completed with error (sct=0, sc=8) 00:29:17.887 starting I/O failed 00:29:17.887 Read completed with error (sct=0, sc=8) 00:29:17.887 starting I/O failed 00:29:17.887 Read 
completed with error (sct=0, sc=8) 00:29:17.887 starting I/O failed 00:29:17.887 Write completed with error (sct=0, sc=8) 00:29:17.887 starting I/O failed 00:29:17.887 Read completed with error (sct=0, sc=8) 00:29:17.887 starting I/O failed 00:29:17.887 Write completed with error (sct=0, sc=8) 00:29:17.887 starting I/O failed 00:29:17.887 Write completed with error (sct=0, sc=8) 00:29:17.887 starting I/O failed 00:29:17.887 Read completed with error (sct=0, sc=8) 00:29:17.887 starting I/O failed 00:29:17.887 Write completed with error (sct=0, sc=8) 00:29:17.887 starting I/O failed 00:29:17.887 Read completed with error (sct=0, sc=8) 00:29:17.887 starting I/O failed 00:29:17.887 Write completed with error (sct=0, sc=8) 00:29:17.887 starting I/O failed 00:29:17.887 Write completed with error (sct=0, sc=8) 00:29:17.887 starting I/O failed 00:29:17.887 Read completed with error (sct=0, sc=8) 00:29:17.887 starting I/O failed 00:29:17.887 Read completed with error (sct=0, sc=8) 00:29:17.887 starting I/O failed 00:29:17.887 Read completed with error (sct=0, sc=8) 00:29:17.887 starting I/O failed 00:29:17.887 Write completed with error (sct=0, sc=8) 00:29:17.887 starting I/O failed 00:29:17.887 Write completed with error (sct=0, sc=8) 00:29:17.887 starting I/O failed 00:29:17.887 Read completed with error (sct=0, sc=8) 00:29:17.887 starting I/O failed 00:29:17.887 Read completed with error (sct=0, sc=8) 00:29:17.887 starting I/O failed 00:29:17.887 [2024-07-24 23:17:35.458572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.887 [2024-07-24 23:17:35.459169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.887 [2024-07-24 23:17:35.459258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c04000b90 with addr=10.0.0.2, port=4420 00:29:17.887 qpair failed and we were unable to recover it. 
00:29:17.887 [2024-07-24 23:17:35.459629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.887 [2024-07-24 23:17:35.459668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c04000b90 with addr=10.0.0.2, port=4420 00:29:17.887 qpair failed and we were unable to recover it. 00:29:17.887 [2024-07-24 23:17:35.460003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.887 [2024-07-24 23:17:35.460015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.887 qpair failed and we were unable to recover it. 00:29:17.887 [2024-07-24 23:17:35.460381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.460390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.460579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.460588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.460926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.460937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 
00:29:17.888 [2024-07-24 23:17:35.461142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.461151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.461406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.461415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.461812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.461821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.462261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.462270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.462475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.462484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 
00:29:17.888 [2024-07-24 23:17:35.462870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.462880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.463234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.463243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.463594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.463604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.463960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.463970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.464238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.464248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 
00:29:17.888 [2024-07-24 23:17:35.464364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.464373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.464648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.464657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.464741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.464749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.464939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.464949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.465325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.465335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 
00:29:17.888 [2024-07-24 23:17:35.465798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.465808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.466217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.466227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.466421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.466430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.466705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.466714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.467011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.467020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 
00:29:17.888 [2024-07-24 23:17:35.467254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.467263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.467624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.467633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.467828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.467838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.468120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.468130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.468483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.468492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 
00:29:17.888 [2024-07-24 23:17:35.468849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.468859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.469228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.469237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.469600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.469608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.469991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.470001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.470231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.470240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 
00:29:17.888 [2024-07-24 23:17:35.470652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.470661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.471080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.471089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.471465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.471474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.471874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.471884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.472256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.472266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 
00:29:17.888 [2024-07-24 23:17:35.472683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.472692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.473137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.473146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.473501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.473510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.473871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.473880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.474068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.474077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 
00:29:17.888 [2024-07-24 23:17:35.474497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.474507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.474891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.474902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.475131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.475141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.475505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.475515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.475904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.475914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 
00:29:17.888 [2024-07-24 23:17:35.476147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.476156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.476565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.476575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.476872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.476884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.477277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.477285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.477476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.477485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 
00:29:17.888 [2024-07-24 23:17:35.477901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.477910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.478294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.478305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.478547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.478557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.478937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.478946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.479386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.479394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 
00:29:17.888 [2024-07-24 23:17:35.479758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.479767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.480149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.480159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.480523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.480533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.480638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.480647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.480860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.480873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 
00:29:17.888 [2024-07-24 23:17:35.481229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.481238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.481595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.481605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.481828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.481838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.482131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.482140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 00:29:17.888 [2024-07-24 23:17:35.482515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.888 [2024-07-24 23:17:35.482523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.888 qpair failed and we were unable to recover it. 
00:29:17.889 23:17:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:17.889 23:17:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:29:17.889 23:17:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:29:17.889 23:17:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:17.889 23:17:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:17.890 [2024-07-24 23:17:35.518435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.890 [2024-07-24 23:17:35.518446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.890 qpair failed and we were unable to recover it. 00:29:17.890 [2024-07-24 23:17:35.518651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.890 [2024-07-24 23:17:35.518660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.890 qpair failed and we were unable to recover it. 00:29:17.890 [2024-07-24 23:17:35.519032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.890 [2024-07-24 23:17:35.519042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.890 qpair failed and we were unable to recover it. 00:29:17.890 [2024-07-24 23:17:35.519395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.890 [2024-07-24 23:17:35.519404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.890 qpair failed and we were unable to recover it. 00:29:17.890 [2024-07-24 23:17:35.519797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.890 [2024-07-24 23:17:35.519807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.890 qpair failed and we were unable to recover it. 
00:29:17.890 [2024-07-24 23:17:35.520008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.890 [2024-07-24 23:17:35.520019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.890 qpair failed and we were unable to recover it. 00:29:17.890 [2024-07-24 23:17:35.520434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.890 [2024-07-24 23:17:35.520443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.890 qpair failed and we were unable to recover it. 00:29:17.890 [2024-07-24 23:17:35.520796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.890 [2024-07-24 23:17:35.520807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.890 qpair failed and we were unable to recover it. 00:29:17.890 [2024-07-24 23:17:35.521074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.890 [2024-07-24 23:17:35.521084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.890 qpair failed and we were unable to recover it. 00:29:17.890 [2024-07-24 23:17:35.521270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.890 [2024-07-24 23:17:35.521279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.890 qpair failed and we were unable to recover it. 
00:29:17.890 [2024-07-24 23:17:35.521644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.890 [2024-07-24 23:17:35.521653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.890 qpair failed and we were unable to recover it. 00:29:17.890 [2024-07-24 23:17:35.521953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.890 [2024-07-24 23:17:35.521962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.890 qpair failed and we were unable to recover it. 00:29:17.890 [2024-07-24 23:17:35.522352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.890 [2024-07-24 23:17:35.522362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.890 qpair failed and we were unable to recover it. 00:29:17.890 [2024-07-24 23:17:35.522721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.890 [2024-07-24 23:17:35.522731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.890 qpair failed and we were unable to recover it. 00:29:17.890 [2024-07-24 23:17:35.522967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.890 [2024-07-24 23:17:35.522976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.890 qpair failed and we were unable to recover it. 
00:29:17.890 [2024-07-24 23:17:35.523378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.890 [2024-07-24 23:17:35.523387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.890 qpair failed and we were unable to recover it. 00:29:17.890 [2024-07-24 23:17:35.523742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.890 [2024-07-24 23:17:35.523755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.890 qpair failed and we were unable to recover it. 00:29:17.890 [2024-07-24 23:17:35.523912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.890 [2024-07-24 23:17:35.523921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.890 qpair failed and we were unable to recover it. 00:29:17.890 [2024-07-24 23:17:35.524269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.890 [2024-07-24 23:17:35.524278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.890 qpair failed and we were unable to recover it. 00:29:17.890 [2024-07-24 23:17:35.524633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.890 [2024-07-24 23:17:35.524642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.890 qpair failed and we were unable to recover it. 
00:29:17.890 [2024-07-24 23:17:35.525028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.890 [2024-07-24 23:17:35.525041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.890 qpair failed and we were unable to recover it. 00:29:17.890 [2024-07-24 23:17:35.525437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.890 [2024-07-24 23:17:35.525446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.890 qpair failed and we were unable to recover it. 00:29:17.890 [2024-07-24 23:17:35.525807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.890 [2024-07-24 23:17:35.525817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.890 qpair failed and we were unable to recover it. 00:29:17.890 [2024-07-24 23:17:35.526026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.890 [2024-07-24 23:17:35.526036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.890 qpair failed and we were unable to recover it. 00:29:17.890 [2024-07-24 23:17:35.526417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.890 [2024-07-24 23:17:35.526426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.890 qpair failed and we were unable to recover it. 
00:29:17.890 [2024-07-24 23:17:35.526655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.890 [2024-07-24 23:17:35.526665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.890 qpair failed and we were unable to recover it. 00:29:17.890 [2024-07-24 23:17:35.526912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.890 [2024-07-24 23:17:35.526923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.890 qpair failed and we were unable to recover it. 00:29:17.890 [2024-07-24 23:17:35.527120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.890 [2024-07-24 23:17:35.527130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.890 qpair failed and we were unable to recover it. 00:29:17.890 [2024-07-24 23:17:35.527492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.890 [2024-07-24 23:17:35.527501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.890 qpair failed and we were unable to recover it. 00:29:17.890 23:17:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:17.890 [2024-07-24 23:17:35.527727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.890 [2024-07-24 23:17:35.527742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.890 qpair failed and we were unable to recover it. 
00:29:17.890 [2024-07-24 23:17:35.527915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.890 [2024-07-24 23:17:35.527925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.890 qpair failed and we were unable to recover it. 00:29:17.890 23:17:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:17.890 [2024-07-24 23:17:35.528319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.890 [2024-07-24 23:17:35.528330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 23:17:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.891 [2024-07-24 23:17:35.528626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.528640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 23:17:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:17.891 [2024-07-24 23:17:35.529038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.529048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 
00:29:17.891 [2024-07-24 23:17:35.529396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.529405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.529812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.529822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.530188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.530197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.530577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.530587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.530970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.530980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 
00:29:17.891 [2024-07-24 23:17:35.531193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.531202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.531561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.531569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.531932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.531942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.532156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.532165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.532534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.532543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 
00:29:17.891 [2024-07-24 23:17:35.532929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.532938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.533295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.533307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.533599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.533608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.533934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.533944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.534332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.534342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 
00:29:17.891 [2024-07-24 23:17:35.534579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.534589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.534785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.534795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.534980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.534991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.535489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.535499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.535865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.535874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 
00:29:17.891 [2024-07-24 23:17:35.535952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.535960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.536312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.536321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.536766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.536776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.537158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.537167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.537564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.537573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 
00:29:17.891 [2024-07-24 23:17:35.537776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.537785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.538110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.538120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.538595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.538604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.538971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.538980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.539339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.539348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 
00:29:17.891 [2024-07-24 23:17:35.539704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.539714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.539944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.539953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.540293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.540302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.540689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.540698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.541060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.541070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 
00:29:17.891 [2024-07-24 23:17:35.541261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.541271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.541677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.541686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.542049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.542058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.542253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.542263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.542599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.542608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 
00:29:17.891 [2024-07-24 23:17:35.542841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.542851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.543215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.543225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.543619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.543628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.544048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.544057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.544407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.544416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 
00:29:17.891 [2024-07-24 23:17:35.544769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.544779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.545032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.545041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.545428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.545437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.545640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.545649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.545998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.546008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 
00:29:17.891 Malloc0 00:29:17.891 [2024-07-24 23:17:35.546235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.546249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.546504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.546514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.546901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.546911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 23:17:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.891 [2024-07-24 23:17:35.547194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.547204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 
00:29:17.891 23:17:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:17.891 [2024-07-24 23:17:35.547614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.547623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 23:17:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.891 23:17:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:17.891 [2024-07-24 23:17:35.548051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.548060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.548296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.548306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.548373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.548381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 
00:29:17.891 [2024-07-24 23:17:35.548785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.548796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.548994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.549005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.549403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.549412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.549613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.891 [2024-07-24 23:17:35.549622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.891 qpair failed and we were unable to recover it. 00:29:17.891 [2024-07-24 23:17:35.549973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.549982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 
00:29:17.892 [2024-07-24 23:17:35.550343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.550352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.550704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.550714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.551039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.551048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.551398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.551408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.551852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.551861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 
00:29:17.892 [2024-07-24 23:17:35.552314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.552322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.552720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.552730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.553118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.553128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.553486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.553495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.553678] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:17.892 [2024-07-24 23:17:35.553725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.553735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 
00:29:17.892 [2024-07-24 23:17:35.554150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.554160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.554541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.554552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.554893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.554902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.554993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.555001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.555195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.555204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 
00:29:17.892 [2024-07-24 23:17:35.555573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.555582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.555862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.555871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.556161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.556170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.556407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.556417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.556627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.556636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 
00:29:17.892 [2024-07-24 23:17:35.557061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.557071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.557495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.557504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.557940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.557950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.558169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.558178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.558448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.558457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 
00:29:17.892 [2024-07-24 23:17:35.558844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.558853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.559257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.559267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.559464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.559472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.559698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.559707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.560124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.560133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 
00:29:17.892 [2024-07-24 23:17:35.560528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.560536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.560895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.560904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.561280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.561290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.561651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.561660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.562079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.562088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 
00:29:17.892 [2024-07-24 23:17:35.562455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.562465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 23:17:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.892 [2024-07-24 23:17:35.562867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.562877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 23:17:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:17.892 [2024-07-24 23:17:35.563241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.563252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 23:17:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.892 23:17:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:17.892 [2024-07-24 23:17:35.563629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.563638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 
00:29:17.892 [2024-07-24 23:17:35.564012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.564023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.564290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.564299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.564656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.564665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.564974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.564983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.565352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.565362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 
00:29:17.892 [2024-07-24 23:17:35.565631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.565640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.566030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.566040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.566393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.566402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.566758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.566767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.567064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.567073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 
00:29:17.892 [2024-07-24 23:17:35.567427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.567436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.567663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.567672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.567911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.567922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.568297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.568306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.568662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.568671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 
00:29:17.892 [2024-07-24 23:17:35.568869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.568878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.569257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.569265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.569625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.569633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.569841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.569852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.570213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.570223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 
00:29:17.892 [2024-07-24 23:17:35.570613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.570622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.571043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.571053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.571283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.571293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.571509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.571519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.571910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.571919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 
00:29:17.892 [2024-07-24 23:17:35.572279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.572289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.572641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.572651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.573013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.573025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.573260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.573269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.573516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.573526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 
00:29:17.892 [2024-07-24 23:17:35.573911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.892 [2024-07-24 23:17:35.573921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.892 qpair failed and we were unable to recover it. 00:29:17.892 [2024-07-24 23:17:35.574297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.574307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 00:29:17.893 23:17:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.893 [2024-07-24 23:17:35.574682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.574692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 00:29:17.893 [2024-07-24 23:17:35.575049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.575059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 
00:29:17.893 23:17:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:17.893 [2024-07-24 23:17:35.575293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.575301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 00:29:17.893 23:17:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.893 [2024-07-24 23:17:35.575569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.575579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 00:29:17.893 23:17:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:17.893 [2024-07-24 23:17:35.575946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.575956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 00:29:17.893 [2024-07-24 23:17:35.576148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.576158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 
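The xtrace lines interleaved above show `host/target_disconnect.sh` bringing the target back up over SPDK's JSON-RPC interface (`nvmf_create_transport -t tcp -o`, `nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001`, `nvmf_subsystem_add_ns ... Malloc0`) while the host side keeps retrying its connection. As a sketch only, the same bring-up can be issued by hand with SPDK's `scripts/rpc.py`; the NQN, serial number, and bdev name are taken from the trace, while the malloc bdev geometry and the listener line are assumptions filled in to make the sequence complete (the listener matches the 10.0.0.2:4420 endpoint the host is dialing):

```shell
# Sketch of the target bring-up visible in the xtrace. Assumes a running
# SPDK nvmf_tgt and scripts/rpc.py from the SPDK repo; this is NOT part of
# the log itself. Bdev size/block-size and the listener line are assumed.
scripts/rpc.py nvmf_create_transport -t tcp -o            # as in the trace
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # geometry assumed
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

Until the listener is registered, every host-side `connect()` to 10.0.0.2:4420 fails exactly as logged, which is the window this disconnect test exercises.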
00:29:17.893 [2024-07-24 23:17:35.576552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.576561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 00:29:17.893 [2024-07-24 23:17:35.576909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.576920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 00:29:17.893 [2024-07-24 23:17:35.577292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.577302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 00:29:17.893 [2024-07-24 23:17:35.577522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.577531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 00:29:17.893 [2024-07-24 23:17:35.577935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.577945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 
00:29:17.893 [2024-07-24 23:17:35.578196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.578205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 00:29:17.893 [2024-07-24 23:17:35.578482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.578491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 00:29:17.893 [2024-07-24 23:17:35.578729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.578739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 00:29:17.893 [2024-07-24 23:17:35.579170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.579180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 00:29:17.893 [2024-07-24 23:17:35.579575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.579584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 
00:29:17.893 [2024-07-24 23:17:35.579962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.579972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 00:29:17.893 [2024-07-24 23:17:35.580322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.580331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 00:29:17.893 [2024-07-24 23:17:35.580523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.580532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 00:29:17.893 [2024-07-24 23:17:35.580908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.580918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 00:29:17.893 [2024-07-24 23:17:35.580983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.580991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 
00:29:17.893 [2024-07-24 23:17:35.581384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.581393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 00:29:17.893 [2024-07-24 23:17:35.581583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.581592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 00:29:17.893 [2024-07-24 23:17:35.581786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.581795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 00:29:17.893 [2024-07-24 23:17:35.582120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.582129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 00:29:17.893 [2024-07-24 23:17:35.582522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.582531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 
00:29:17.893 [2024-07-24 23:17:35.582598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.582606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 00:29:17.893 [2024-07-24 23:17:35.583037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.583047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 00:29:17.893 [2024-07-24 23:17:35.583280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.583290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 00:29:17.893 [2024-07-24 23:17:35.583667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.583676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 00:29:17.893 [2024-07-24 23:17:35.584098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.584107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 
00:29:17.893 [2024-07-24 23:17:35.584458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.584467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 00:29:17.893 [2024-07-24 23:17:35.584615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.584632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 00:29:17.893 [2024-07-24 23:17:35.585013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.585023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 00:29:17.893 [2024-07-24 23:17:35.585385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.585397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 00:29:17.893 [2024-07-24 23:17:35.585623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.585632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 
00:29:17.893 [2024-07-24 23:17:35.585853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.585863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 00:29:17.893 [2024-07-24 23:17:35.586087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.586096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 00:29:17.893 [2024-07-24 23:17:35.586510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.586519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 00:29:17.893 23:17:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.893 [2024-07-24 23:17:35.586840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.893 [2024-07-24 23:17:35.586850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420 00:29:17.893 qpair failed and we were unable to recover it. 
00:29:17.893 23:17:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:17.893 [2024-07-24 23:17:35.587252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.893 [2024-07-24 23:17:35.587261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420
00:29:17.893 qpair failed and we were unable to recover it.
00:29:17.893 23:17:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:17.893 [2024-07-24 23:17:35.587614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.893 [2024-07-24 23:17:35.587623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420
00:29:17.893 23:17:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:17.893 qpair failed and we were unable to recover it.
00:29:17.893 [2024-07-24 23:17:35.587984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.893 [2024-07-24 23:17:35.587993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420
00:29:17.893 qpair failed and we were unable to recover it.
00:29:17.893 [2024-07-24 23:17:35.588184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.893 [2024-07-24 23:17:35.588194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420
00:29:17.893 qpair failed and we were unable to recover it.
00:29:17.893 [2024-07-24 23:17:35.588299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.893 [2024-07-24 23:17:35.588308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420
00:29:17.893 qpair failed and we were unable to recover it.
00:29:17.893 [2024-07-24 23:17:35.588657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.893 [2024-07-24 23:17:35.588666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420
00:29:17.893 qpair failed and we were unable to recover it.
00:29:17.893 [2024-07-24 23:17:35.588897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.893 [2024-07-24 23:17:35.588907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420
00:29:17.893 qpair failed and we were unable to recover it.
00:29:17.893 [2024-07-24 23:17:35.589266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.893 [2024-07-24 23:17:35.589276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420
00:29:17.893 qpair failed and we were unable to recover it.
00:29:17.893 [2024-07-24 23:17:35.589638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.893 [2024-07-24 23:17:35.589647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420
00:29:17.893 qpair failed and we were unable to recover it.
00:29:17.893 [2024-07-24 23:17:35.590029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.893 [2024-07-24 23:17:35.590038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420
00:29:17.893 qpair failed and we were unable to recover it.
00:29:17.893 [2024-07-24 23:17:35.590284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.893 [2024-07-24 23:17:35.590292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420
00:29:17.893 qpair failed and we were unable to recover it.
00:29:17.893 [2024-07-24 23:17:35.590644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.893 [2024-07-24 23:17:35.590652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420
00:29:17.893 qpair failed and we were unable to recover it.
00:29:17.893 [2024-07-24 23:17:35.591034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.893 [2024-07-24 23:17:35.591044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420
00:29:17.893 qpair failed and we were unable to recover it.
00:29:17.893 [2024-07-24 23:17:35.591458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.893 [2024-07-24 23:17:35.591468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420
00:29:17.893 qpair failed and we were unable to recover it.
00:29:17.893 [2024-07-24 23:17:35.591816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.893 [2024-07-24 23:17:35.591825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420
00:29:17.893 qpair failed and we were unable to recover it.
00:29:17.893 [2024-07-24 23:17:35.592165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.893 [2024-07-24 23:17:35.592174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420
00:29:17.893 qpair failed and we were unable to recover it.
00:29:17.893 [2024-07-24 23:17:35.592401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.893 [2024-07-24 23:17:35.592410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420
00:29:17.893 qpair failed and we were unable to recover it.
00:29:17.893 [2024-07-24 23:17:35.592790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.893 [2024-07-24 23:17:35.592799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420
00:29:17.893 qpair failed and we were unable to recover it.
00:29:17.893 [2024-07-24 23:17:35.593177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.893 [2024-07-24 23:17:35.593186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420
00:29:17.893 qpair failed and we were unable to recover it.
00:29:17.893 [2024-07-24 23:17:35.593580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.893 [2024-07-24 23:17:35.593590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420
00:29:17.893 qpair failed and we were unable to recover it.
00:29:17.893 [2024-07-24 23:17:35.593820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.893 [2024-07-24 23:17:35.593830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086aa0 with addr=10.0.0.2, port=4420
00:29:17.893 qpair failed and we were unable to recover it.
00:29:17.893 [2024-07-24 23:17:35.593970] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:17.893 23:17:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:17.893 23:17:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:17.893 23:17:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:17.893 23:17:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:17.893 [2024-07-24 23:17:35.604522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
[2024-07-24 23:17:35.604612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
[2024-07-24 23:17:35.604631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
[2024-07-24 23:17:35.604639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-07-24 23:17:35.604645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
[2024-07-24 23:17:35.604666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.893 qpair failed and we were unable to recover it.
00:29:17.893 23:17:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:17.893 23:17:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1047121
[2024-07-24 23:17:35.614547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
[2024-07-24 23:17:35.614644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
[2024-07-24 23:17:35.614662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
[2024-07-24 23:17:35.614670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-07-24 23:17:35.614676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:17.894 [2024-07-24 23:17:35.614692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.894 qpair failed and we were unable to recover it.
00:29:17.894 [2024-07-24 23:17:35.624506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.894 [2024-07-24 23:17:35.624586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.894 [2024-07-24 23:17:35.624603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.894 [2024-07-24 23:17:35.624610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.894 [2024-07-24 23:17:35.624616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:17.894 [2024-07-24 23:17:35.624630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.894 qpair failed and we were unable to recover it.
00:29:17.894 [2024-07-24 23:17:35.634494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.894 [2024-07-24 23:17:35.634576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.894 [2024-07-24 23:17:35.634593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.894 [2024-07-24 23:17:35.634600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.894 [2024-07-24 23:17:35.634606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:17.894 [2024-07-24 23:17:35.634620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.894 qpair failed and we were unable to recover it.
00:29:17.894 [2024-07-24 23:17:35.644503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.894 [2024-07-24 23:17:35.644580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.894 [2024-07-24 23:17:35.644596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.894 [2024-07-24 23:17:35.644604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.894 [2024-07-24 23:17:35.644609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:17.894 [2024-07-24 23:17:35.644624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.894 qpair failed and we were unable to recover it.
00:29:17.894 [2024-07-24 23:17:35.654540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.894 [2024-07-24 23:17:35.654611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.894 [2024-07-24 23:17:35.654628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.894 [2024-07-24 23:17:35.654635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.894 [2024-07-24 23:17:35.654640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:17.894 [2024-07-24 23:17:35.654655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.894 qpair failed and we were unable to recover it.
00:29:18.156 [2024-07-24 23:17:35.664526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.156 [2024-07-24 23:17:35.664600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.156 [2024-07-24 23:17:35.664617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.156 [2024-07-24 23:17:35.664624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.156 [2024-07-24 23:17:35.664630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:18.156 [2024-07-24 23:17:35.664644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.156 qpair failed and we were unable to recover it.
00:29:18.156 [2024-07-24 23:17:35.674611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.156 [2024-07-24 23:17:35.674686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.156 [2024-07-24 23:17:35.674705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.156 [2024-07-24 23:17:35.674712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.156 [2024-07-24 23:17:35.674718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:18.156 [2024-07-24 23:17:35.674733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.156 qpair failed and we were unable to recover it.
00:29:18.156 [2024-07-24 23:17:35.684609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.156 [2024-07-24 23:17:35.684689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.156 [2024-07-24 23:17:35.684705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.156 [2024-07-24 23:17:35.684712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.156 [2024-07-24 23:17:35.684718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:18.156 [2024-07-24 23:17:35.684732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.156 qpair failed and we were unable to recover it.
00:29:18.156 [2024-07-24 23:17:35.694515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.156 [2024-07-24 23:17:35.694590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.156 [2024-07-24 23:17:35.694606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.156 [2024-07-24 23:17:35.694613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.156 [2024-07-24 23:17:35.694619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:18.156 [2024-07-24 23:17:35.694634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.156 qpair failed and we were unable to recover it.
00:29:18.156 [2024-07-24 23:17:35.704644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.156 [2024-07-24 23:17:35.704713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.156 [2024-07-24 23:17:35.704730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.156 [2024-07-24 23:17:35.704737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.156 [2024-07-24 23:17:35.704742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:18.156 [2024-07-24 23:17:35.704761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.156 qpair failed and we were unable to recover it.
00:29:18.156 [2024-07-24 23:17:35.714677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.156 [2024-07-24 23:17:35.714755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.156 [2024-07-24 23:17:35.714772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.156 [2024-07-24 23:17:35.714779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.156 [2024-07-24 23:17:35.714784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:18.156 [2024-07-24 23:17:35.714799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.156 qpair failed and we were unable to recover it.
00:29:18.156 [2024-07-24 23:17:35.724722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.156 [2024-07-24 23:17:35.724873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.156 [2024-07-24 23:17:35.724891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.156 [2024-07-24 23:17:35.724897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.156 [2024-07-24 23:17:35.724903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:18.156 [2024-07-24 23:17:35.724917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.156 qpair failed and we were unable to recover it.
00:29:18.156 [2024-07-24 23:17:35.734649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.156 [2024-07-24 23:17:35.734725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.156 [2024-07-24 23:17:35.734742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.156 [2024-07-24 23:17:35.734748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.156 [2024-07-24 23:17:35.734758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:18.156 [2024-07-24 23:17:35.734773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.156 qpair failed and we were unable to recover it.
00:29:18.157 [2024-07-24 23:17:35.744785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.157 [2024-07-24 23:17:35.744858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.157 [2024-07-24 23:17:35.744874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.157 [2024-07-24 23:17:35.744881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.157 [2024-07-24 23:17:35.744887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.157 [2024-07-24 23:17:35.744900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.157 qpair failed and we were unable to recover it. 
00:29:18.157 [2024-07-24 23:17:35.754780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.157 [2024-07-24 23:17:35.754856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.157 [2024-07-24 23:17:35.754872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.157 [2024-07-24 23:17:35.754879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.157 [2024-07-24 23:17:35.754885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.157 [2024-07-24 23:17:35.754899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.157 qpair failed and we were unable to recover it. 
00:29:18.157 [2024-07-24 23:17:35.764834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.157 [2024-07-24 23:17:35.764916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.157 [2024-07-24 23:17:35.764937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.157 [2024-07-24 23:17:35.764944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.157 [2024-07-24 23:17:35.764950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.157 [2024-07-24 23:17:35.764964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.157 qpair failed and we were unable to recover it. 
00:29:18.157 [2024-07-24 23:17:35.774844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.157 [2024-07-24 23:17:35.774962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.157 [2024-07-24 23:17:35.774979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.157 [2024-07-24 23:17:35.774986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.157 [2024-07-24 23:17:35.774992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.157 [2024-07-24 23:17:35.775006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.157 qpair failed and we were unable to recover it. 
00:29:18.157 [2024-07-24 23:17:35.784890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.157 [2024-07-24 23:17:35.784972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.157 [2024-07-24 23:17:35.784988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.157 [2024-07-24 23:17:35.784995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.157 [2024-07-24 23:17:35.785001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.157 [2024-07-24 23:17:35.785016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.157 qpair failed and we were unable to recover it. 
00:29:18.157 [2024-07-24 23:17:35.794886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.157 [2024-07-24 23:17:35.794966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.157 [2024-07-24 23:17:35.794982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.157 [2024-07-24 23:17:35.794989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.157 [2024-07-24 23:17:35.794996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.157 [2024-07-24 23:17:35.795011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.157 qpair failed and we were unable to recover it. 
00:29:18.157 [2024-07-24 23:17:35.805012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.157 [2024-07-24 23:17:35.805090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.157 [2024-07-24 23:17:35.805107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.157 [2024-07-24 23:17:35.805113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.157 [2024-07-24 23:17:35.805119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.157 [2024-07-24 23:17:35.805137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.157 qpair failed and we were unable to recover it. 
00:29:18.157 [2024-07-24 23:17:35.814893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.157 [2024-07-24 23:17:35.814968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.157 [2024-07-24 23:17:35.814986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.157 [2024-07-24 23:17:35.814993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.157 [2024-07-24 23:17:35.814999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.157 [2024-07-24 23:17:35.815013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.157 qpair failed and we were unable to recover it. 
00:29:18.157 [2024-07-24 23:17:35.825004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.157 [2024-07-24 23:17:35.825105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.157 [2024-07-24 23:17:35.825122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.157 [2024-07-24 23:17:35.825129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.157 [2024-07-24 23:17:35.825135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.157 [2024-07-24 23:17:35.825149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.157 qpair failed and we were unable to recover it. 
00:29:18.157 [2024-07-24 23:17:35.835004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.157 [2024-07-24 23:17:35.835079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.157 [2024-07-24 23:17:35.835095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.157 [2024-07-24 23:17:35.835101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.157 [2024-07-24 23:17:35.835107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.157 [2024-07-24 23:17:35.835122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.157 qpair failed and we were unable to recover it. 
00:29:18.157 [2024-07-24 23:17:35.844960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.157 [2024-07-24 23:17:35.845037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.157 [2024-07-24 23:17:35.845054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.157 [2024-07-24 23:17:35.845061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.157 [2024-07-24 23:17:35.845067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.157 [2024-07-24 23:17:35.845082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.157 qpair failed and we were unable to recover it. 
00:29:18.157 [2024-07-24 23:17:35.855084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.157 [2024-07-24 23:17:35.855158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.157 [2024-07-24 23:17:35.855178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.157 [2024-07-24 23:17:35.855185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.157 [2024-07-24 23:17:35.855190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.157 [2024-07-24 23:17:35.855205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.157 qpair failed and we were unable to recover it. 
00:29:18.157 [2024-07-24 23:17:35.865022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.157 [2024-07-24 23:17:35.865099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.157 [2024-07-24 23:17:35.865115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.157 [2024-07-24 23:17:35.865122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.157 [2024-07-24 23:17:35.865128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.157 [2024-07-24 23:17:35.865142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.157 qpair failed and we were unable to recover it. 
00:29:18.157 [2024-07-24 23:17:35.875094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.157 [2024-07-24 23:17:35.875170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.158 [2024-07-24 23:17:35.875187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.158 [2024-07-24 23:17:35.875194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.158 [2024-07-24 23:17:35.875200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.158 [2024-07-24 23:17:35.875215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.158 qpair failed and we were unable to recover it. 
00:29:18.158 [2024-07-24 23:17:35.885183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.158 [2024-07-24 23:17:35.885262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.158 [2024-07-24 23:17:35.885278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.158 [2024-07-24 23:17:35.885285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.158 [2024-07-24 23:17:35.885291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.158 [2024-07-24 23:17:35.885305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.158 qpair failed and we were unable to recover it. 
00:29:18.158 [2024-07-24 23:17:35.895784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.158 [2024-07-24 23:17:35.895882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.158 [2024-07-24 23:17:35.895899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.158 [2024-07-24 23:17:35.895906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.158 [2024-07-24 23:17:35.895912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.158 [2024-07-24 23:17:35.895929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.158 qpair failed and we were unable to recover it. 
00:29:18.158 [2024-07-24 23:17:35.905161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.158 [2024-07-24 23:17:35.905241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.158 [2024-07-24 23:17:35.905258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.158 [2024-07-24 23:17:35.905265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.158 [2024-07-24 23:17:35.905271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.158 [2024-07-24 23:17:35.905285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.158 qpair failed and we were unable to recover it. 
00:29:18.158 [2024-07-24 23:17:35.915246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.158 [2024-07-24 23:17:35.915335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.158 [2024-07-24 23:17:35.915352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.158 [2024-07-24 23:17:35.915358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.158 [2024-07-24 23:17:35.915364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.158 [2024-07-24 23:17:35.915379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.158 qpair failed and we were unable to recover it. 
00:29:18.158 [2024-07-24 23:17:35.925297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.158 [2024-07-24 23:17:35.925375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.158 [2024-07-24 23:17:35.925391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.158 [2024-07-24 23:17:35.925398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.158 [2024-07-24 23:17:35.925404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.158 [2024-07-24 23:17:35.925418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.158 qpair failed and we were unable to recover it. 
00:29:18.158 [2024-07-24 23:17:35.935295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.158 [2024-07-24 23:17:35.935366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.158 [2024-07-24 23:17:35.935383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.158 [2024-07-24 23:17:35.935390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.158 [2024-07-24 23:17:35.935395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.158 [2024-07-24 23:17:35.935409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.158 qpair failed and we were unable to recover it. 
00:29:18.420 [2024-07-24 23:17:35.945313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.420 [2024-07-24 23:17:35.945389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.420 [2024-07-24 23:17:35.945409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.420 [2024-07-24 23:17:35.945416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.420 [2024-07-24 23:17:35.945422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.420 [2024-07-24 23:17:35.945436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.420 qpair failed and we were unable to recover it. 
00:29:18.420 [2024-07-24 23:17:35.955357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.420 [2024-07-24 23:17:35.955440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.420 [2024-07-24 23:17:35.955466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.420 [2024-07-24 23:17:35.955474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.420 [2024-07-24 23:17:35.955481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.420 [2024-07-24 23:17:35.955500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.420 qpair failed and we were unable to recover it. 
00:29:18.420 [2024-07-24 23:17:35.965360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.420 [2024-07-24 23:17:35.965442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.420 [2024-07-24 23:17:35.965460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.420 [2024-07-24 23:17:35.965467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.420 [2024-07-24 23:17:35.965473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.420 [2024-07-24 23:17:35.965488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.420 qpair failed and we were unable to recover it. 
00:29:18.420 [2024-07-24 23:17:35.975395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.420 [2024-07-24 23:17:35.975489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.420 [2024-07-24 23:17:35.975507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.420 [2024-07-24 23:17:35.975514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.420 [2024-07-24 23:17:35.975521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.420 [2024-07-24 23:17:35.975536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.420 qpair failed and we were unable to recover it. 
00:29:18.420 [2024-07-24 23:17:35.985352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.420 [2024-07-24 23:17:35.985433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.420 [2024-07-24 23:17:35.985450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.420 [2024-07-24 23:17:35.985457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.420 [2024-07-24 23:17:35.985468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.420 [2024-07-24 23:17:35.985483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.420 qpair failed and we were unable to recover it. 
00:29:18.420 [2024-07-24 23:17:35.995467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.420 [2024-07-24 23:17:35.995546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.420 [2024-07-24 23:17:35.995563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.420 [2024-07-24 23:17:35.995570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.420 [2024-07-24 23:17:35.995576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.420 [2024-07-24 23:17:35.995591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.420 qpair failed and we were unable to recover it. 
00:29:18.420 [2024-07-24 23:17:36.005453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.420 [2024-07-24 23:17:36.005565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.420 [2024-07-24 23:17:36.005582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.420 [2024-07-24 23:17:36.005589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.420 [2024-07-24 23:17:36.005595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.420 [2024-07-24 23:17:36.005609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.420 qpair failed and we were unable to recover it. 
00:29:18.420 [2024-07-24 23:17:36.015513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.420 [2024-07-24 23:17:36.015610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.420 [2024-07-24 23:17:36.015628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.420 [2024-07-24 23:17:36.015635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.420 [2024-07-24 23:17:36.015641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.420 [2024-07-24 23:17:36.015655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.420 qpair failed and we were unable to recover it. 
00:29:18.420 [2024-07-24 23:17:36.025554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.420 [2024-07-24 23:17:36.025634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.420 [2024-07-24 23:17:36.025651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.420 [2024-07-24 23:17:36.025658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.420 [2024-07-24 23:17:36.025664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.420 [2024-07-24 23:17:36.025678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.420 qpair failed and we were unable to recover it. 
00:29:18.420 [2024-07-24 23:17:36.035559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.420 [2024-07-24 23:17:36.035638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.420 [2024-07-24 23:17:36.035654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.420 [2024-07-24 23:17:36.035661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.420 [2024-07-24 23:17:36.035667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.420 [2024-07-24 23:17:36.035681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.420 qpair failed and we were unable to recover it. 
00:29:18.420 [2024-07-24 23:17:36.045578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.420 [2024-07-24 23:17:36.045653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.420 [2024-07-24 23:17:36.045669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.421 [2024-07-24 23:17:36.045676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.421 [2024-07-24 23:17:36.045682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.421 [2024-07-24 23:17:36.045696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.421 qpair failed and we were unable to recover it. 
00:29:18.421 [2024-07-24 23:17:36.055656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.421 [2024-07-24 23:17:36.055729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.421 [2024-07-24 23:17:36.055745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.421 [2024-07-24 23:17:36.055756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.421 [2024-07-24 23:17:36.055762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.421 [2024-07-24 23:17:36.055777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.421 qpair failed and we were unable to recover it. 
00:29:18.421 [2024-07-24 23:17:36.065628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.421 [2024-07-24 23:17:36.065702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.421 [2024-07-24 23:17:36.065718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.421 [2024-07-24 23:17:36.065725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.421 [2024-07-24 23:17:36.065731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.421 [2024-07-24 23:17:36.065745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.421 qpair failed and we were unable to recover it. 
00:29:18.421 [2024-07-24 23:17:36.075669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.421 [2024-07-24 23:17:36.075742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.421 [2024-07-24 23:17:36.075762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.421 [2024-07-24 23:17:36.075769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.421 [2024-07-24 23:17:36.075779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.421 [2024-07-24 23:17:36.075794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.421 qpair failed and we were unable to recover it. 
00:29:18.421 [2024-07-24 23:17:36.085691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.421 [2024-07-24 23:17:36.085772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.421 [2024-07-24 23:17:36.085789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.421 [2024-07-24 23:17:36.085795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.421 [2024-07-24 23:17:36.085801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.421 [2024-07-24 23:17:36.085816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.421 qpair failed and we were unable to recover it. 
00:29:18.421 [2024-07-24 23:17:36.095715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.421 [2024-07-24 23:17:36.095786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.421 [2024-07-24 23:17:36.095803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.421 [2024-07-24 23:17:36.095810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.421 [2024-07-24 23:17:36.095815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.421 [2024-07-24 23:17:36.095830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.421 qpair failed and we were unable to recover it. 
00:29:18.421 [2024-07-24 23:17:36.105742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.421 [2024-07-24 23:17:36.105820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.421 [2024-07-24 23:17:36.105836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.421 [2024-07-24 23:17:36.105843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.421 [2024-07-24 23:17:36.105849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.421 [2024-07-24 23:17:36.105863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.421 qpair failed and we were unable to recover it. 
00:29:18.421 [2024-07-24 23:17:36.115671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.421 [2024-07-24 23:17:36.115744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.421 [2024-07-24 23:17:36.115763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.421 [2024-07-24 23:17:36.115770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.421 [2024-07-24 23:17:36.115775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.421 [2024-07-24 23:17:36.115790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.421 qpair failed and we were unable to recover it. 
00:29:18.421 [2024-07-24 23:17:36.125824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.421 [2024-07-24 23:17:36.125943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.421 [2024-07-24 23:17:36.125960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.421 [2024-07-24 23:17:36.125967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.421 [2024-07-24 23:17:36.125973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.421 [2024-07-24 23:17:36.125987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.421 qpair failed and we were unable to recover it. 
00:29:18.421 [2024-07-24 23:17:36.135832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.421 [2024-07-24 23:17:36.135905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.421 [2024-07-24 23:17:36.135922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.421 [2024-07-24 23:17:36.135928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.421 [2024-07-24 23:17:36.135934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.421 [2024-07-24 23:17:36.135949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.421 qpair failed and we were unable to recover it. 
00:29:18.421 [2024-07-24 23:17:36.145788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.421 [2024-07-24 23:17:36.145884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.421 [2024-07-24 23:17:36.145900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.421 [2024-07-24 23:17:36.145907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.421 [2024-07-24 23:17:36.145913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.421 [2024-07-24 23:17:36.145928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.421 qpair failed and we were unable to recover it. 
00:29:18.421 [2024-07-24 23:17:36.155902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.421 [2024-07-24 23:17:36.156021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.421 [2024-07-24 23:17:36.156037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.421 [2024-07-24 23:17:36.156044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.421 [2024-07-24 23:17:36.156050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.421 [2024-07-24 23:17:36.156065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.421 qpair failed and we were unable to recover it. 
00:29:18.421 [2024-07-24 23:17:36.165858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.421 [2024-07-24 23:17:36.165940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.421 [2024-07-24 23:17:36.165957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.421 [2024-07-24 23:17:36.165964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.421 [2024-07-24 23:17:36.165977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.421 [2024-07-24 23:17:36.165991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.421 qpair failed and we were unable to recover it. 
00:29:18.421 [2024-07-24 23:17:36.175979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.421 [2024-07-24 23:17:36.176057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.421 [2024-07-24 23:17:36.176074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.422 [2024-07-24 23:17:36.176081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.422 [2024-07-24 23:17:36.176087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.422 [2024-07-24 23:17:36.176101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.422 qpair failed and we were unable to recover it. 
00:29:18.422 [2024-07-24 23:17:36.185974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.422 [2024-07-24 23:17:36.186046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.422 [2024-07-24 23:17:36.186063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.422 [2024-07-24 23:17:36.186070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.422 [2024-07-24 23:17:36.186076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.422 [2024-07-24 23:17:36.186090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.422 qpair failed and we were unable to recover it. 
00:29:18.422 [2024-07-24 23:17:36.196056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.422 [2024-07-24 23:17:36.196130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.422 [2024-07-24 23:17:36.196146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.422 [2024-07-24 23:17:36.196153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.422 [2024-07-24 23:17:36.196159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.422 [2024-07-24 23:17:36.196173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.422 qpair failed and we were unable to recover it. 
00:29:18.684 [2024-07-24 23:17:36.206031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.684 [2024-07-24 23:17:36.206118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.684 [2024-07-24 23:17:36.206134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.684 [2024-07-24 23:17:36.206141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.684 [2024-07-24 23:17:36.206147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.684 [2024-07-24 23:17:36.206161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.684 qpair failed and we were unable to recover it. 
00:29:18.684 [2024-07-24 23:17:36.215964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.684 [2024-07-24 23:17:36.216034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.684 [2024-07-24 23:17:36.216051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.684 [2024-07-24 23:17:36.216058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.684 [2024-07-24 23:17:36.216064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.684 [2024-07-24 23:17:36.216079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.684 qpair failed and we were unable to recover it. 
00:29:18.684 [2024-07-24 23:17:36.225986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.684 [2024-07-24 23:17:36.226069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.684 [2024-07-24 23:17:36.226086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.684 [2024-07-24 23:17:36.226092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.684 [2024-07-24 23:17:36.226098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.684 [2024-07-24 23:17:36.226119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.684 qpair failed and we were unable to recover it. 
00:29:18.684 [2024-07-24 23:17:36.236113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.684 [2024-07-24 23:17:36.236190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.684 [2024-07-24 23:17:36.236207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.684 [2024-07-24 23:17:36.236214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.684 [2024-07-24 23:17:36.236220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.684 [2024-07-24 23:17:36.236234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.684 qpair failed and we were unable to recover it. 
00:29:18.684 [2024-07-24 23:17:36.246143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.684 [2024-07-24 23:17:36.246218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.684 [2024-07-24 23:17:36.246234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.684 [2024-07-24 23:17:36.246241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.684 [2024-07-24 23:17:36.246247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.684 [2024-07-24 23:17:36.246261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.684 qpair failed and we were unable to recover it. 
00:29:18.684 [2024-07-24 23:17:36.256228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.684 [2024-07-24 23:17:36.256300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.684 [2024-07-24 23:17:36.256316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.684 [2024-07-24 23:17:36.256327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.684 [2024-07-24 23:17:36.256333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.684 [2024-07-24 23:17:36.256347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.684 qpair failed and we were unable to recover it. 
00:29:18.684 [2024-07-24 23:17:36.266193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.684 [2024-07-24 23:17:36.266266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.684 [2024-07-24 23:17:36.266283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.684 [2024-07-24 23:17:36.266290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.684 [2024-07-24 23:17:36.266296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.684 [2024-07-24 23:17:36.266310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.684 qpair failed and we were unable to recover it. 
00:29:18.684 [2024-07-24 23:17:36.276260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.684 [2024-07-24 23:17:36.276347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.684 [2024-07-24 23:17:36.276364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.684 [2024-07-24 23:17:36.276371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.684 [2024-07-24 23:17:36.276376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.684 [2024-07-24 23:17:36.276391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.684 qpair failed and we were unable to recover it. 
00:29:18.684 [2024-07-24 23:17:36.286303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.684 [2024-07-24 23:17:36.286384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.684 [2024-07-24 23:17:36.286400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.684 [2024-07-24 23:17:36.286407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.684 [2024-07-24 23:17:36.286413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.684 [2024-07-24 23:17:36.286427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.684 qpair failed and we were unable to recover it. 
00:29:18.684 [2024-07-24 23:17:36.296280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.684 [2024-07-24 23:17:36.296353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.684 [2024-07-24 23:17:36.296369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.684 [2024-07-24 23:17:36.296376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.684 [2024-07-24 23:17:36.296382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.684 [2024-07-24 23:17:36.296396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.684 qpair failed and we were unable to recover it. 
00:29:18.684 [2024-07-24 23:17:36.306307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.684 [2024-07-24 23:17:36.306376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.684 [2024-07-24 23:17:36.306392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.684 [2024-07-24 23:17:36.306399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.684 [2024-07-24 23:17:36.306405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.684 [2024-07-24 23:17:36.306419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.684 qpair failed and we were unable to recover it. 
00:29:18.684 [2024-07-24 23:17:36.316340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.684 [2024-07-24 23:17:36.316421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.684 [2024-07-24 23:17:36.316447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.684 [2024-07-24 23:17:36.316455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.685 [2024-07-24 23:17:36.316462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.685 [2024-07-24 23:17:36.316481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.685 qpair failed and we were unable to recover it. 
00:29:18.685 [2024-07-24 23:17:36.326372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.685 [2024-07-24 23:17:36.326451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.685 [2024-07-24 23:17:36.326470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.685 [2024-07-24 23:17:36.326477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.685 [2024-07-24 23:17:36.326483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.685 [2024-07-24 23:17:36.326500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.685 qpair failed and we were unable to recover it. 
00:29:18.685 [2024-07-24 23:17:36.336398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.685 [2024-07-24 23:17:36.336480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.685 [2024-07-24 23:17:36.336506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.685 [2024-07-24 23:17:36.336514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.685 [2024-07-24 23:17:36.336520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.685 [2024-07-24 23:17:36.336539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.685 qpair failed and we were unable to recover it. 
00:29:18.685 [2024-07-24 23:17:36.346417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.685 [2024-07-24 23:17:36.346515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.685 [2024-07-24 23:17:36.346540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.685 [2024-07-24 23:17:36.346553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.685 [2024-07-24 23:17:36.346560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.685 [2024-07-24 23:17:36.346579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.685 qpair failed and we were unable to recover it. 
00:29:18.685 [2024-07-24 23:17:36.356471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.685 [2024-07-24 23:17:36.356553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.685 [2024-07-24 23:17:36.356578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.685 [2024-07-24 23:17:36.356587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.685 [2024-07-24 23:17:36.356593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.685 [2024-07-24 23:17:36.356612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.685 qpair failed and we were unable to recover it. 
00:29:18.685 [2024-07-24 23:17:36.366479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.685 [2024-07-24 23:17:36.366556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.685 [2024-07-24 23:17:36.366575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.685 [2024-07-24 23:17:36.366582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.685 [2024-07-24 23:17:36.366588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.685 [2024-07-24 23:17:36.366604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.685 qpair failed and we were unable to recover it. 
00:29:18.685 [2024-07-24 23:17:36.376461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.685 [2024-07-24 23:17:36.376530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.685 [2024-07-24 23:17:36.376547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.685 [2024-07-24 23:17:36.376554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.685 [2024-07-24 23:17:36.376560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.685 [2024-07-24 23:17:36.376574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.685 qpair failed and we were unable to recover it. 
00:29:18.685 [2024-07-24 23:17:36.386535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.685 [2024-07-24 23:17:36.386641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.685 [2024-07-24 23:17:36.386657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.685 [2024-07-24 23:17:36.386664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.685 [2024-07-24 23:17:36.386670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.685 [2024-07-24 23:17:36.386684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.685 qpair failed and we were unable to recover it. 
00:29:18.685 [2024-07-24 23:17:36.396557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.685 [2024-07-24 23:17:36.396637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.685 [2024-07-24 23:17:36.396654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.685 [2024-07-24 23:17:36.396661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.685 [2024-07-24 23:17:36.396667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.685 [2024-07-24 23:17:36.396681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.685 qpair failed and we were unable to recover it. 
00:29:18.685 [2024-07-24 23:17:36.406591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.685 [2024-07-24 23:17:36.406668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.685 [2024-07-24 23:17:36.406684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.685 [2024-07-24 23:17:36.406691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.685 [2024-07-24 23:17:36.406697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.685 [2024-07-24 23:17:36.406711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.685 qpair failed and we were unable to recover it. 
00:29:18.685 [2024-07-24 23:17:36.416627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.685 [2024-07-24 23:17:36.416697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.685 [2024-07-24 23:17:36.416714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.685 [2024-07-24 23:17:36.416721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.685 [2024-07-24 23:17:36.416727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.685 [2024-07-24 23:17:36.416741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.685 qpair failed and we were unable to recover it. 
00:29:18.685 [2024-07-24 23:17:36.426605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.685 [2024-07-24 23:17:36.426683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.685 [2024-07-24 23:17:36.426699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.685 [2024-07-24 23:17:36.426706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.685 [2024-07-24 23:17:36.426712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.685 [2024-07-24 23:17:36.426727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.685 qpair failed and we were unable to recover it. 
00:29:18.685 [2024-07-24 23:17:36.436673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.686 [2024-07-24 23:17:36.436760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.686 [2024-07-24 23:17:36.436781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.686 [2024-07-24 23:17:36.436788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.686 [2024-07-24 23:17:36.436795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.686 [2024-07-24 23:17:36.436810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.686 qpair failed and we were unable to recover it. 
00:29:18.686 [2024-07-24 23:17:36.446729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.686 [2024-07-24 23:17:36.446812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.686 [2024-07-24 23:17:36.446837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.686 [2024-07-24 23:17:36.446844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.686 [2024-07-24 23:17:36.446850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.686 [2024-07-24 23:17:36.446866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.686 qpair failed and we were unable to recover it. 
00:29:18.686 [2024-07-24 23:17:36.456722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.686 [2024-07-24 23:17:36.456802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.686 [2024-07-24 23:17:36.456819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.686 [2024-07-24 23:17:36.456826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.686 [2024-07-24 23:17:36.456832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.686 [2024-07-24 23:17:36.456846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.686 qpair failed and we were unable to recover it. 
00:29:18.686 [2024-07-24 23:17:36.466763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.686 [2024-07-24 23:17:36.466838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.686 [2024-07-24 23:17:36.466855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.686 [2024-07-24 23:17:36.466862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.686 [2024-07-24 23:17:36.466867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.686 [2024-07-24 23:17:36.466882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.686 qpair failed and we were unable to recover it. 
00:29:18.948 [2024-07-24 23:17:36.476691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.948 [2024-07-24 23:17:36.476778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.948 [2024-07-24 23:17:36.476795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.948 [2024-07-24 23:17:36.476802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.948 [2024-07-24 23:17:36.476808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.948 [2024-07-24 23:17:36.476823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.948 qpair failed and we were unable to recover it. 
00:29:18.948 [2024-07-24 23:17:36.486819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.948 [2024-07-24 23:17:36.486901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.948 [2024-07-24 23:17:36.486918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.948 [2024-07-24 23:17:36.486925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.948 [2024-07-24 23:17:36.486931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.948 [2024-07-24 23:17:36.486946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.948 qpair failed and we were unable to recover it. 
00:29:18.948 [2024-07-24 23:17:36.496746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.948 [2024-07-24 23:17:36.496818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.948 [2024-07-24 23:17:36.496834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.948 [2024-07-24 23:17:36.496841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.948 [2024-07-24 23:17:36.496847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.948 [2024-07-24 23:17:36.496862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.948 qpair failed and we were unable to recover it. 
00:29:18.948 [2024-07-24 23:17:36.506883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.948 [2024-07-24 23:17:36.506958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.948 [2024-07-24 23:17:36.506975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.948 [2024-07-24 23:17:36.506981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.948 [2024-07-24 23:17:36.506988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.948 [2024-07-24 23:17:36.507002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.948 qpair failed and we were unable to recover it. 
00:29:18.948 [2024-07-24 23:17:36.516907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.948 [2024-07-24 23:17:36.516982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.948 [2024-07-24 23:17:36.516998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.948 [2024-07-24 23:17:36.517005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.948 [2024-07-24 23:17:36.517011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.948 [2024-07-24 23:17:36.517025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.948 qpair failed and we were unable to recover it. 
00:29:18.948 [2024-07-24 23:17:36.526958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.948 [2024-07-24 23:17:36.527035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.948 [2024-07-24 23:17:36.527054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.948 [2024-07-24 23:17:36.527061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.948 [2024-07-24 23:17:36.527067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.948 [2024-07-24 23:17:36.527081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.948 qpair failed and we were unable to recover it. 
00:29:18.948 [2024-07-24 23:17:36.536869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.948 [2024-07-24 23:17:36.536966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.948 [2024-07-24 23:17:36.536983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.948 [2024-07-24 23:17:36.536990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.948 [2024-07-24 23:17:36.536997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.948 [2024-07-24 23:17:36.537011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.948 qpair failed and we were unable to recover it. 
00:29:18.948 [2024-07-24 23:17:36.547032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.948 [2024-07-24 23:17:36.547109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.948 [2024-07-24 23:17:36.547125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.948 [2024-07-24 23:17:36.547132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.948 [2024-07-24 23:17:36.547137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.948 [2024-07-24 23:17:36.547152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.948 qpair failed and we were unable to recover it. 
00:29:18.948 [2024-07-24 23:17:36.556931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.948 [2024-07-24 23:17:36.557010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.948 [2024-07-24 23:17:36.557027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.948 [2024-07-24 23:17:36.557034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.948 [2024-07-24 23:17:36.557040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.948 [2024-07-24 23:17:36.557055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.948 qpair failed and we were unable to recover it. 
00:29:18.948 [2024-07-24 23:17:36.567055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.948 [2024-07-24 23:17:36.567145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.948 [2024-07-24 23:17:36.567161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.948 [2024-07-24 23:17:36.567168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.948 [2024-07-24 23:17:36.567174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.948 [2024-07-24 23:17:36.567192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.948 qpair failed and we were unable to recover it. 
00:29:18.949 [2024-07-24 23:17:36.577095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.949 [2024-07-24 23:17:36.577161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.949 [2024-07-24 23:17:36.577177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.949 [2024-07-24 23:17:36.577184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.949 [2024-07-24 23:17:36.577190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.949 [2024-07-24 23:17:36.577205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.949 qpair failed and we were unable to recover it. 
00:29:18.949 [2024-07-24 23:17:36.587109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.949 [2024-07-24 23:17:36.587178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.949 [2024-07-24 23:17:36.587194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.949 [2024-07-24 23:17:36.587201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.949 [2024-07-24 23:17:36.587207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.949 [2024-07-24 23:17:36.587222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.949 qpair failed and we were unable to recover it. 
00:29:18.949 [2024-07-24 23:17:36.597100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.949 [2024-07-24 23:17:36.597172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.949 [2024-07-24 23:17:36.597189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.949 [2024-07-24 23:17:36.597195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.949 [2024-07-24 23:17:36.597201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.949 [2024-07-24 23:17:36.597216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.949 qpair failed and we were unable to recover it. 
00:29:18.949 [2024-07-24 23:17:36.607061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.949 [2024-07-24 23:17:36.607139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.949 [2024-07-24 23:17:36.607155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.949 [2024-07-24 23:17:36.607161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.949 [2024-07-24 23:17:36.607167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.949 [2024-07-24 23:17:36.607181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.949 qpair failed and we were unable to recover it. 
00:29:18.949 [2024-07-24 23:17:36.617192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.949 [2024-07-24 23:17:36.617263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.949 [2024-07-24 23:17:36.617282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.949 [2024-07-24 23:17:36.617289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.949 [2024-07-24 23:17:36.617295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.949 [2024-07-24 23:17:36.617309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.949 qpair failed and we were unable to recover it. 
00:29:18.949 [2024-07-24 23:17:36.627226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.949 [2024-07-24 23:17:36.627298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.949 [2024-07-24 23:17:36.627314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.949 [2024-07-24 23:17:36.627321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.949 [2024-07-24 23:17:36.627327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.949 [2024-07-24 23:17:36.627341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.949 qpair failed and we were unable to recover it. 
00:29:18.949 [2024-07-24 23:17:36.637245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.949 [2024-07-24 23:17:36.637320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.949 [2024-07-24 23:17:36.637337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.949 [2024-07-24 23:17:36.637344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.949 [2024-07-24 23:17:36.637349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.949 [2024-07-24 23:17:36.637364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.949 qpair failed and we were unable to recover it. 
00:29:18.949 [2024-07-24 23:17:36.647270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.949 [2024-07-24 23:17:36.647347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.949 [2024-07-24 23:17:36.647363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.949 [2024-07-24 23:17:36.647370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.949 [2024-07-24 23:17:36.647376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.949 [2024-07-24 23:17:36.647391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.949 qpair failed and we were unable to recover it. 
00:29:18.949 [2024-07-24 23:17:36.657352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.949 [2024-07-24 23:17:36.657424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.949 [2024-07-24 23:17:36.657440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.949 [2024-07-24 23:17:36.657447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.949 [2024-07-24 23:17:36.657453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:18.949 [2024-07-24 23:17:36.657471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.949 qpair failed and we were unable to recover it. 
00:29:18.949 [2024-07-24 23:17:36.667337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.949 [2024-07-24 23:17:36.667409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.949 [2024-07-24 23:17:36.667426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.949 [2024-07-24 23:17:36.667432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.949 [2024-07-24 23:17:36.667439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:18.949 [2024-07-24 23:17:36.667453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.949 qpair failed and we were unable to recover it.
00:29:18.949 [2024-07-24 23:17:36.677365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.949 [2024-07-24 23:17:36.677439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.949 [2024-07-24 23:17:36.677456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.949 [2024-07-24 23:17:36.677462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.949 [2024-07-24 23:17:36.677468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:18.949 [2024-07-24 23:17:36.677483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.949 qpair failed and we were unable to recover it.
00:29:18.949 [2024-07-24 23:17:36.687409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.949 [2024-07-24 23:17:36.687480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.949 [2024-07-24 23:17:36.687496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.949 [2024-07-24 23:17:36.687503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.949 [2024-07-24 23:17:36.687509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:18.949 [2024-07-24 23:17:36.687523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.949 qpair failed and we were unable to recover it.
00:29:18.949 [2024-07-24 23:17:36.697400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.949 [2024-07-24 23:17:36.697471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.949 [2024-07-24 23:17:36.697488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.949 [2024-07-24 23:17:36.697495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.949 [2024-07-24 23:17:36.697501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:18.949 [2024-07-24 23:17:36.697515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.949 qpair failed and we were unable to recover it.
00:29:18.949 [2024-07-24 23:17:36.707406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.949 [2024-07-24 23:17:36.707515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.950 [2024-07-24 23:17:36.707535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.950 [2024-07-24 23:17:36.707542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.950 [2024-07-24 23:17:36.707548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:18.950 [2024-07-24 23:17:36.707563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.950 qpair failed and we were unable to recover it.
00:29:18.950 [2024-07-24 23:17:36.717468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.950 [2024-07-24 23:17:36.717544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.950 [2024-07-24 23:17:36.717560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.950 [2024-07-24 23:17:36.717567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.950 [2024-07-24 23:17:36.717573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:18.950 [2024-07-24 23:17:36.717587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.950 qpair failed and we were unable to recover it.
00:29:18.950 [2024-07-24 23:17:36.727503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.950 [2024-07-24 23:17:36.727585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.950 [2024-07-24 23:17:36.727601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.950 [2024-07-24 23:17:36.727608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.950 [2024-07-24 23:17:36.727614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:18.950 [2024-07-24 23:17:36.727628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.950 qpair failed and we were unable to recover it.
00:29:19.212 [2024-07-24 23:17:36.737538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.212 [2024-07-24 23:17:36.737611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.212 [2024-07-24 23:17:36.737627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.212 [2024-07-24 23:17:36.737634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.212 [2024-07-24 23:17:36.737640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:19.212 [2024-07-24 23:17:36.737654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:19.212 qpair failed and we were unable to recover it.
00:29:19.212 [2024-07-24 23:17:36.747488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.212 [2024-07-24 23:17:36.747592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.212 [2024-07-24 23:17:36.747609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.212 [2024-07-24 23:17:36.747616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.212 [2024-07-24 23:17:36.747626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:19.212 [2024-07-24 23:17:36.747640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:19.212 qpair failed and we were unable to recover it.
00:29:19.212 [2024-07-24 23:17:36.757572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.212 [2024-07-24 23:17:36.757650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.212 [2024-07-24 23:17:36.757666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.212 [2024-07-24 23:17:36.757673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.212 [2024-07-24 23:17:36.757679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:19.212 [2024-07-24 23:17:36.757693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:19.212 qpair failed and we were unable to recover it.
00:29:19.212 [2024-07-24 23:17:36.767604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.212 [2024-07-24 23:17:36.767727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.212 [2024-07-24 23:17:36.767744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.212 [2024-07-24 23:17:36.767754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.212 [2024-07-24 23:17:36.767761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:19.212 [2024-07-24 23:17:36.767775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:19.212 qpair failed and we were unable to recover it.
00:29:19.212 [2024-07-24 23:17:36.777618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.212 [2024-07-24 23:17:36.777690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.212 [2024-07-24 23:17:36.777707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.212 [2024-07-24 23:17:36.777713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.212 [2024-07-24 23:17:36.777719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:19.212 [2024-07-24 23:17:36.777734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:19.212 qpair failed and we were unable to recover it.
00:29:19.212 [2024-07-24 23:17:36.787623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.212 [2024-07-24 23:17:36.787695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.212 [2024-07-24 23:17:36.787712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.212 [2024-07-24 23:17:36.787719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.212 [2024-07-24 23:17:36.787725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:19.213 [2024-07-24 23:17:36.787739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:19.213 qpair failed and we were unable to recover it.
00:29:19.213 [2024-07-24 23:17:36.797689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.213 [2024-07-24 23:17:36.797768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.213 [2024-07-24 23:17:36.797786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.213 [2024-07-24 23:17:36.797793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.213 [2024-07-24 23:17:36.797798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:19.213 [2024-07-24 23:17:36.797813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:19.213 qpair failed and we were unable to recover it.
00:29:19.213 [2024-07-24 23:17:36.807713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.213 [2024-07-24 23:17:36.807795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.213 [2024-07-24 23:17:36.807812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.213 [2024-07-24 23:17:36.807818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.213 [2024-07-24 23:17:36.807824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:19.213 [2024-07-24 23:17:36.807839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:19.213 qpair failed and we were unable to recover it.
00:29:19.213 [2024-07-24 23:17:36.817642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.213 [2024-07-24 23:17:36.817718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.213 [2024-07-24 23:17:36.817735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.213 [2024-07-24 23:17:36.817741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.213 [2024-07-24 23:17:36.817747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:19.213 [2024-07-24 23:17:36.817766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:19.213 qpair failed and we were unable to recover it.
00:29:19.213 [2024-07-24 23:17:36.827773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.213 [2024-07-24 23:17:36.827843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.213 [2024-07-24 23:17:36.827860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.213 [2024-07-24 23:17:36.827867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.213 [2024-07-24 23:17:36.827873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:19.213 [2024-07-24 23:17:36.827887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:19.213 qpair failed and we were unable to recover it.
00:29:19.213 [2024-07-24 23:17:36.837809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.213 [2024-07-24 23:17:36.837882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.213 [2024-07-24 23:17:36.837899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.213 [2024-07-24 23:17:36.837906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.213 [2024-07-24 23:17:36.837916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:19.213 [2024-07-24 23:17:36.837930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:19.213 qpair failed and we were unable to recover it.
00:29:19.213 [2024-07-24 23:17:36.847831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.213 [2024-07-24 23:17:36.847910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.213 [2024-07-24 23:17:36.847926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.213 [2024-07-24 23:17:36.847933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.213 [2024-07-24 23:17:36.847939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:19.213 [2024-07-24 23:17:36.847953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:19.213 qpair failed and we were unable to recover it.
00:29:19.213 [2024-07-24 23:17:36.857862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.213 [2024-07-24 23:17:36.857938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.213 [2024-07-24 23:17:36.857955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.213 [2024-07-24 23:17:36.857963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.213 [2024-07-24 23:17:36.857971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:19.213 [2024-07-24 23:17:36.857987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:19.213 qpair failed and we were unable to recover it.
00:29:19.213 [2024-07-24 23:17:36.867886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.213 [2024-07-24 23:17:36.867963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.213 [2024-07-24 23:17:36.867979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.213 [2024-07-24 23:17:36.867986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.213 [2024-07-24 23:17:36.867992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:19.213 [2024-07-24 23:17:36.868007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:19.213 qpair failed and we were unable to recover it.
00:29:19.213 [2024-07-24 23:17:36.877913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.213 [2024-07-24 23:17:36.877996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.213 [2024-07-24 23:17:36.878016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.213 [2024-07-24 23:17:36.878023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.213 [2024-07-24 23:17:36.878029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:19.213 [2024-07-24 23:17:36.878043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:19.213 qpair failed and we were unable to recover it.
00:29:19.213 [2024-07-24 23:17:36.887897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.213 [2024-07-24 23:17:36.887972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.213 [2024-07-24 23:17:36.887989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.213 [2024-07-24 23:17:36.887996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.213 [2024-07-24 23:17:36.888002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:19.213 [2024-07-24 23:17:36.888017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:19.213 qpair failed and we were unable to recover it.
00:29:19.213 [2024-07-24 23:17:36.897878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.213 [2024-07-24 23:17:36.897956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.213 [2024-07-24 23:17:36.897973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.213 [2024-07-24 23:17:36.897980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.213 [2024-07-24 23:17:36.897986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:19.213 [2024-07-24 23:17:36.898000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:19.213 qpair failed and we were unable to recover it.
00:29:19.213 [2024-07-24 23:17:36.908124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.213 [2024-07-24 23:17:36.908200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.213 [2024-07-24 23:17:36.908216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.213 [2024-07-24 23:17:36.908223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.213 [2024-07-24 23:17:36.908228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:19.213 [2024-07-24 23:17:36.908244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:19.213 qpair failed and we were unable to recover it.
00:29:19.213 [2024-07-24 23:17:36.917936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.213 [2024-07-24 23:17:36.918023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.213 [2024-07-24 23:17:36.918040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.213 [2024-07-24 23:17:36.918046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.213 [2024-07-24 23:17:36.918052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:19.213 [2024-07-24 23:17:36.918067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:19.213 qpair failed and we were unable to recover it.
00:29:19.213 [2024-07-24 23:17:36.928114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.214 [2024-07-24 23:17:36.928233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.214 [2024-07-24 23:17:36.928250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.214 [2024-07-24 23:17:36.928257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.214 [2024-07-24 23:17:36.928270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:19.214 [2024-07-24 23:17:36.928286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:19.214 qpair failed and we were unable to recover it.
00:29:19.214 [2024-07-24 23:17:36.938234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.214 [2024-07-24 23:17:36.938306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.214 [2024-07-24 23:17:36.938322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.214 [2024-07-24 23:17:36.938329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.214 [2024-07-24 23:17:36.938335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:19.214 [2024-07-24 23:17:36.938350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:19.214 qpair failed and we were unable to recover it.
00:29:19.214 [2024-07-24 23:17:36.948122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.214 [2024-07-24 23:17:36.948198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.214 [2024-07-24 23:17:36.948219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.214 [2024-07-24 23:17:36.948226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.214 [2024-07-24 23:17:36.948232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:19.214 [2024-07-24 23:17:36.948247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:19.214 qpair failed and we were unable to recover it.
00:29:19.214 [2024-07-24 23:17:36.958099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.214 [2024-07-24 23:17:36.958200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.214 [2024-07-24 23:17:36.958217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.214 [2024-07-24 23:17:36.958224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.214 [2024-07-24 23:17:36.958230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:19.214 [2024-07-24 23:17:36.958245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:19.214 qpair failed and we were unable to recover it.
00:29:19.214 [2024-07-24 23:17:36.968203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.214 [2024-07-24 23:17:36.968297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.214 [2024-07-24 23:17:36.968313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.214 [2024-07-24 23:17:36.968320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.214 [2024-07-24 23:17:36.968326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:19.214 [2024-07-24 23:17:36.968340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:19.214 qpair failed and we were unable to recover it.
00:29:19.214 [2024-07-24 23:17:36.978210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.214 [2024-07-24 23:17:36.978282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.214 [2024-07-24 23:17:36.978299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.214 [2024-07-24 23:17:36.978306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.214 [2024-07-24 23:17:36.978312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:19.214 [2024-07-24 23:17:36.978327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:19.214 qpair failed and we were unable to recover it.
00:29:19.214 [2024-07-24 23:17:36.988260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.214 [2024-07-24 23:17:36.988368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.214 [2024-07-24 23:17:36.988385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.214 [2024-07-24 23:17:36.988392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.214 [2024-07-24 23:17:36.988398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:19.214 [2024-07-24 23:17:36.988413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:19.214 qpair failed and we were unable to recover it.
00:29:19.476 [2024-07-24 23:17:36.998321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.476 [2024-07-24 23:17:36.998395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.476 [2024-07-24 23:17:36.998412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.476 [2024-07-24 23:17:36.998419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.476 [2024-07-24 23:17:36.998425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:19.476 [2024-07-24 23:17:36.998439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:19.476 qpair failed and we were unable to recover it.
00:29:19.476 [2024-07-24 23:17:37.008273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.476 [2024-07-24 23:17:37.008347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.476 [2024-07-24 23:17:37.008364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.476 [2024-07-24 23:17:37.008373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.476 [2024-07-24 23:17:37.008379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:19.476 [2024-07-24 23:17:37.008394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:19.476 qpair failed and we were unable to recover it.
00:29:19.476 [2024-07-24 23:17:37.018213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.476 [2024-07-24 23:17:37.018287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.476 [2024-07-24 23:17:37.018304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.476 [2024-07-24 23:17:37.018315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.476 [2024-07-24 23:17:37.018322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:19.476 [2024-07-24 23:17:37.018336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:19.476 qpair failed and we were unable to recover it.
00:29:19.476 [2024-07-24 23:17:37.028366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.476 [2024-07-24 23:17:37.028449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.477 [2024-07-24 23:17:37.028466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.477 [2024-07-24 23:17:37.028473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.477 [2024-07-24 23:17:37.028479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.477 [2024-07-24 23:17:37.028494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.477 qpair failed and we were unable to recover it. 
00:29:19.477 [2024-07-24 23:17:37.038532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.477 [2024-07-24 23:17:37.038610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.477 [2024-07-24 23:17:37.038627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.477 [2024-07-24 23:17:37.038634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.477 [2024-07-24 23:17:37.038640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.477 [2024-07-24 23:17:37.038654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.477 qpair failed and we were unable to recover it. 
00:29:19.477 [2024-07-24 23:17:37.048427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.477 [2024-07-24 23:17:37.048540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.477 [2024-07-24 23:17:37.048566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.477 [2024-07-24 23:17:37.048575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.477 [2024-07-24 23:17:37.048581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.477 [2024-07-24 23:17:37.048600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.477 qpair failed and we were unable to recover it. 
00:29:19.477 [2024-07-24 23:17:37.058430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.477 [2024-07-24 23:17:37.058509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.477 [2024-07-24 23:17:37.058534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.477 [2024-07-24 23:17:37.058544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.477 [2024-07-24 23:17:37.058550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.477 [2024-07-24 23:17:37.058569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.477 qpair failed and we were unable to recover it. 
00:29:19.477 [2024-07-24 23:17:37.068445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.477 [2024-07-24 23:17:37.068522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.477 [2024-07-24 23:17:37.068548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.477 [2024-07-24 23:17:37.068556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.477 [2024-07-24 23:17:37.068563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.477 [2024-07-24 23:17:37.068582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.477 qpair failed and we were unable to recover it. 
00:29:19.477 [2024-07-24 23:17:37.078472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.477 [2024-07-24 23:17:37.078550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.477 [2024-07-24 23:17:37.078576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.477 [2024-07-24 23:17:37.078584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.477 [2024-07-24 23:17:37.078591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.477 [2024-07-24 23:17:37.078609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.477 qpair failed and we were unable to recover it. 
00:29:19.477 [2024-07-24 23:17:37.088508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.477 [2024-07-24 23:17:37.088594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.477 [2024-07-24 23:17:37.088612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.477 [2024-07-24 23:17:37.088619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.477 [2024-07-24 23:17:37.088625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.477 [2024-07-24 23:17:37.088641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.477 qpair failed and we were unable to recover it. 
00:29:19.477 [2024-07-24 23:17:37.098538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.477 [2024-07-24 23:17:37.098612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.477 [2024-07-24 23:17:37.098629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.477 [2024-07-24 23:17:37.098635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.477 [2024-07-24 23:17:37.098641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.477 [2024-07-24 23:17:37.098656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.477 qpair failed and we were unable to recover it. 
00:29:19.477 [2024-07-24 23:17:37.108552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.477 [2024-07-24 23:17:37.108622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.477 [2024-07-24 23:17:37.108638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.477 [2024-07-24 23:17:37.108650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.477 [2024-07-24 23:17:37.108656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.477 [2024-07-24 23:17:37.108670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.477 qpair failed and we were unable to recover it. 
00:29:19.477 [2024-07-24 23:17:37.118585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.477 [2024-07-24 23:17:37.118659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.477 [2024-07-24 23:17:37.118677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.477 [2024-07-24 23:17:37.118684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.477 [2024-07-24 23:17:37.118689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.477 [2024-07-24 23:17:37.118703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.477 qpair failed and we were unable to recover it. 
00:29:19.477 [2024-07-24 23:17:37.128610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.477 [2024-07-24 23:17:37.128688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.477 [2024-07-24 23:17:37.128705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.477 [2024-07-24 23:17:37.128711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.477 [2024-07-24 23:17:37.128717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.477 [2024-07-24 23:17:37.128732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.477 qpair failed and we were unable to recover it. 
00:29:19.477 [2024-07-24 23:17:37.138575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.477 [2024-07-24 23:17:37.138645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.477 [2024-07-24 23:17:37.138662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.477 [2024-07-24 23:17:37.138669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.478 [2024-07-24 23:17:37.138674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.478 [2024-07-24 23:17:37.138689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.478 qpair failed and we were unable to recover it. 
00:29:19.478 [2024-07-24 23:17:37.148668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.478 [2024-07-24 23:17:37.148740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.478 [2024-07-24 23:17:37.148761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.478 [2024-07-24 23:17:37.148768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.478 [2024-07-24 23:17:37.148774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.478 [2024-07-24 23:17:37.148788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.478 qpair failed and we were unable to recover it. 
00:29:19.478 [2024-07-24 23:17:37.158707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.478 [2024-07-24 23:17:37.158794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.478 [2024-07-24 23:17:37.158810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.478 [2024-07-24 23:17:37.158817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.478 [2024-07-24 23:17:37.158823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.478 [2024-07-24 23:17:37.158837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.478 qpair failed and we were unable to recover it. 
00:29:19.478 [2024-07-24 23:17:37.168730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.478 [2024-07-24 23:17:37.168811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.478 [2024-07-24 23:17:37.168828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.478 [2024-07-24 23:17:37.168835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.478 [2024-07-24 23:17:37.168841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.478 [2024-07-24 23:17:37.168856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.478 qpair failed and we were unable to recover it. 
00:29:19.478 [2024-07-24 23:17:37.178686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.478 [2024-07-24 23:17:37.178787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.478 [2024-07-24 23:17:37.178804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.478 [2024-07-24 23:17:37.178811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.478 [2024-07-24 23:17:37.178817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.478 [2024-07-24 23:17:37.178831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.478 qpair failed and we were unable to recover it. 
00:29:19.478 [2024-07-24 23:17:37.188783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.478 [2024-07-24 23:17:37.188855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.478 [2024-07-24 23:17:37.188872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.478 [2024-07-24 23:17:37.188879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.478 [2024-07-24 23:17:37.188885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.478 [2024-07-24 23:17:37.188899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.478 qpair failed and we were unable to recover it. 
00:29:19.478 [2024-07-24 23:17:37.198800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.478 [2024-07-24 23:17:37.198878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.478 [2024-07-24 23:17:37.198895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.478 [2024-07-24 23:17:37.198905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.478 [2024-07-24 23:17:37.198911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.478 [2024-07-24 23:17:37.198925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.478 qpair failed and we were unable to recover it. 
00:29:19.478 [2024-07-24 23:17:37.208865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.478 [2024-07-24 23:17:37.208950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.478 [2024-07-24 23:17:37.208966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.478 [2024-07-24 23:17:37.208973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.478 [2024-07-24 23:17:37.208979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.478 [2024-07-24 23:17:37.208994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.478 qpair failed and we were unable to recover it. 
00:29:19.478 [2024-07-24 23:17:37.218888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.478 [2024-07-24 23:17:37.218977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.478 [2024-07-24 23:17:37.218994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.478 [2024-07-24 23:17:37.219000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.478 [2024-07-24 23:17:37.219006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.478 [2024-07-24 23:17:37.219021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.478 qpair failed and we were unable to recover it. 
00:29:19.478 [2024-07-24 23:17:37.228956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.478 [2024-07-24 23:17:37.229024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.478 [2024-07-24 23:17:37.229041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.478 [2024-07-24 23:17:37.229048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.478 [2024-07-24 23:17:37.229054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.478 [2024-07-24 23:17:37.229068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.478 qpair failed and we were unable to recover it. 
00:29:19.478 [2024-07-24 23:17:37.238941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.478 [2024-07-24 23:17:37.239016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.478 [2024-07-24 23:17:37.239032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.478 [2024-07-24 23:17:37.239039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.478 [2024-07-24 23:17:37.239045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.478 [2024-07-24 23:17:37.239059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.478 qpair failed and we were unable to recover it. 
00:29:19.478 [2024-07-24 23:17:37.248947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.478 [2024-07-24 23:17:37.249025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.478 [2024-07-24 23:17:37.249042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.478 [2024-07-24 23:17:37.249049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.478 [2024-07-24 23:17:37.249054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.478 [2024-07-24 23:17:37.249069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.478 qpair failed and we were unable to recover it. 
00:29:19.478 [2024-07-24 23:17:37.258973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.478 [2024-07-24 23:17:37.259048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.478 [2024-07-24 23:17:37.259064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.478 [2024-07-24 23:17:37.259071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.478 [2024-07-24 23:17:37.259077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.478 [2024-07-24 23:17:37.259092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.478 qpair failed and we were unable to recover it. 
00:29:19.741 [2024-07-24 23:17:37.268990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.741 [2024-07-24 23:17:37.269058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.741 [2024-07-24 23:17:37.269074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.741 [2024-07-24 23:17:37.269081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.741 [2024-07-24 23:17:37.269087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.741 [2024-07-24 23:17:37.269102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.741 qpair failed and we were unable to recover it. 
00:29:19.741 [2024-07-24 23:17:37.279049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.741 [2024-07-24 23:17:37.279122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.741 [2024-07-24 23:17:37.279138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.741 [2024-07-24 23:17:37.279145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.741 [2024-07-24 23:17:37.279151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.741 [2024-07-24 23:17:37.279166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.741 qpair failed and we were unable to recover it. 
00:29:19.741 [2024-07-24 23:17:37.289082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.741 [2024-07-24 23:17:37.289186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.741 [2024-07-24 23:17:37.289206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.741 [2024-07-24 23:17:37.289213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.741 [2024-07-24 23:17:37.289219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.741 [2024-07-24 23:17:37.289234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.741 qpair failed and we were unable to recover it. 
00:29:19.741 [2024-07-24 23:17:37.299076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.741 [2024-07-24 23:17:37.299168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.741 [2024-07-24 23:17:37.299184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.741 [2024-07-24 23:17:37.299191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.741 [2024-07-24 23:17:37.299197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.741 [2024-07-24 23:17:37.299211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.741 qpair failed and we were unable to recover it. 
00:29:19.741 [2024-07-24 23:17:37.309090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.741 [2024-07-24 23:17:37.309167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.741 [2024-07-24 23:17:37.309183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.741 [2024-07-24 23:17:37.309189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.741 [2024-07-24 23:17:37.309195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.741 [2024-07-24 23:17:37.309209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.742 qpair failed and we were unable to recover it. 
00:29:19.742 [2024-07-24 23:17:37.319142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.742 [2024-07-24 23:17:37.319218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.742 [2024-07-24 23:17:37.319235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.742 [2024-07-24 23:17:37.319242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.742 [2024-07-24 23:17:37.319248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.742 [2024-07-24 23:17:37.319263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.742 qpair failed and we were unable to recover it. 
00:29:19.742 [2024-07-24 23:17:37.329157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.742 [2024-07-24 23:17:37.329235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.742 [2024-07-24 23:17:37.329252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.742 [2024-07-24 23:17:37.329259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.742 [2024-07-24 23:17:37.329265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.742 [2024-07-24 23:17:37.329283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.742 qpair failed and we were unable to recover it. 
00:29:19.742 [2024-07-24 23:17:37.339171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.742 [2024-07-24 23:17:37.339245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.742 [2024-07-24 23:17:37.339262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.742 [2024-07-24 23:17:37.339269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.742 [2024-07-24 23:17:37.339275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.742 [2024-07-24 23:17:37.339290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.742 qpair failed and we were unable to recover it. 
00:29:19.742 [2024-07-24 23:17:37.349225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.742 [2024-07-24 23:17:37.349296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.742 [2024-07-24 23:17:37.349313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.742 [2024-07-24 23:17:37.349320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.742 [2024-07-24 23:17:37.349325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.742 [2024-07-24 23:17:37.349340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.742 qpair failed and we were unable to recover it. 
00:29:19.742 [2024-07-24 23:17:37.359137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.742 [2024-07-24 23:17:37.359234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.742 [2024-07-24 23:17:37.359250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.742 [2024-07-24 23:17:37.359257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.742 [2024-07-24 23:17:37.359263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.742 [2024-07-24 23:17:37.359277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.742 qpair failed and we were unable to recover it. 
00:29:19.742 [2024-07-24 23:17:37.369267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.742 [2024-07-24 23:17:37.369359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.742 [2024-07-24 23:17:37.369376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.742 [2024-07-24 23:17:37.369382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.742 [2024-07-24 23:17:37.369388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.742 [2024-07-24 23:17:37.369403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.742 qpair failed and we were unable to recover it. 
00:29:19.742 [2024-07-24 23:17:37.379312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.742 [2024-07-24 23:17:37.379420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.742 [2024-07-24 23:17:37.379440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.742 [2024-07-24 23:17:37.379447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.742 [2024-07-24 23:17:37.379453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.742 [2024-07-24 23:17:37.379467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.742 qpair failed and we were unable to recover it. 
00:29:19.742 [2024-07-24 23:17:37.389315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.742 [2024-07-24 23:17:37.389402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.742 [2024-07-24 23:17:37.389427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.742 [2024-07-24 23:17:37.389435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.742 [2024-07-24 23:17:37.389442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.742 [2024-07-24 23:17:37.389461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.742 qpair failed and we were unable to recover it. 
00:29:19.742 [2024-07-24 23:17:37.399364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.742 [2024-07-24 23:17:37.399445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.742 [2024-07-24 23:17:37.399464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.742 [2024-07-24 23:17:37.399471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.742 [2024-07-24 23:17:37.399477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.742 [2024-07-24 23:17:37.399493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.742 qpair failed and we were unable to recover it. 
00:29:19.742 [2024-07-24 23:17:37.409426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.742 [2024-07-24 23:17:37.409523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.742 [2024-07-24 23:17:37.409549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.742 [2024-07-24 23:17:37.409557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.742 [2024-07-24 23:17:37.409564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.742 [2024-07-24 23:17:37.409582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.742 qpair failed and we were unable to recover it. 
00:29:19.742 [2024-07-24 23:17:37.419346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.742 [2024-07-24 23:17:37.419437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.742 [2024-07-24 23:17:37.419456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.742 [2024-07-24 23:17:37.419464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.742 [2024-07-24 23:17:37.419470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.742 [2024-07-24 23:17:37.419490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.742 qpair failed and we were unable to recover it. 
00:29:19.742 [2024-07-24 23:17:37.429537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.742 [2024-07-24 23:17:37.429643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.742 [2024-07-24 23:17:37.429660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.742 [2024-07-24 23:17:37.429667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.742 [2024-07-24 23:17:37.429674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.742 [2024-07-24 23:17:37.429689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.742 qpair failed and we were unable to recover it. 
00:29:19.742 [2024-07-24 23:17:37.439497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.742 [2024-07-24 23:17:37.439570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.743 [2024-07-24 23:17:37.439587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.743 [2024-07-24 23:17:37.439594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.743 [2024-07-24 23:17:37.439599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.743 [2024-07-24 23:17:37.439614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.743 qpair failed and we were unable to recover it. 
00:29:19.743 [2024-07-24 23:17:37.449402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.743 [2024-07-24 23:17:37.449482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.743 [2024-07-24 23:17:37.449499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.743 [2024-07-24 23:17:37.449506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.743 [2024-07-24 23:17:37.449512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.743 [2024-07-24 23:17:37.449527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.743 qpair failed and we were unable to recover it. 
00:29:19.743 [2024-07-24 23:17:37.459431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.743 [2024-07-24 23:17:37.459504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.743 [2024-07-24 23:17:37.459520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.743 [2024-07-24 23:17:37.459527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.743 [2024-07-24 23:17:37.459533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.743 [2024-07-24 23:17:37.459548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.743 qpair failed and we were unable to recover it. 
00:29:19.743 [2024-07-24 23:17:37.469553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.743 [2024-07-24 23:17:37.469626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.743 [2024-07-24 23:17:37.469646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.743 [2024-07-24 23:17:37.469653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.743 [2024-07-24 23:17:37.469659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.743 [2024-07-24 23:17:37.469674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.743 qpair failed and we were unable to recover it. 
00:29:19.743 [2024-07-24 23:17:37.479579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.743 [2024-07-24 23:17:37.479651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.743 [2024-07-24 23:17:37.479668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.743 [2024-07-24 23:17:37.479675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.743 [2024-07-24 23:17:37.479681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.743 [2024-07-24 23:17:37.479695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.743 qpair failed and we were unable to recover it. 
00:29:19.743 [2024-07-24 23:17:37.489604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.743 [2024-07-24 23:17:37.489686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.743 [2024-07-24 23:17:37.489702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.743 [2024-07-24 23:17:37.489709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.743 [2024-07-24 23:17:37.489715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.743 [2024-07-24 23:17:37.489729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.743 qpair failed and we were unable to recover it. 
00:29:19.743 [2024-07-24 23:17:37.499515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.743 [2024-07-24 23:17:37.499589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.743 [2024-07-24 23:17:37.499606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.743 [2024-07-24 23:17:37.499612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.743 [2024-07-24 23:17:37.499618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.743 [2024-07-24 23:17:37.499632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.743 qpair failed and we were unable to recover it. 
00:29:19.743 [2024-07-24 23:17:37.509614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.743 [2024-07-24 23:17:37.509687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.743 [2024-07-24 23:17:37.509703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.743 [2024-07-24 23:17:37.509710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.743 [2024-07-24 23:17:37.509716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.743 [2024-07-24 23:17:37.509734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.743 qpair failed and we were unable to recover it. 
00:29:19.743 [2024-07-24 23:17:37.519674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.743 [2024-07-24 23:17:37.519746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.743 [2024-07-24 23:17:37.519769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.743 [2024-07-24 23:17:37.519775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.743 [2024-07-24 23:17:37.519781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:19.743 [2024-07-24 23:17:37.519795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.743 qpair failed and we were unable to recover it. 
00:29:20.006 [2024-07-24 23:17:37.529689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.006 [2024-07-24 23:17:37.529799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.006 [2024-07-24 23:17:37.529816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.006 [2024-07-24 23:17:37.529822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.006 [2024-07-24 23:17:37.529829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.006 [2024-07-24 23:17:37.529844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.006 qpair failed and we were unable to recover it. 
00:29:20.006 [2024-07-24 23:17:37.539732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.006 [2024-07-24 23:17:37.539812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.006 [2024-07-24 23:17:37.539829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.006 [2024-07-24 23:17:37.539836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.006 [2024-07-24 23:17:37.539842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.006 [2024-07-24 23:17:37.539856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.006 qpair failed and we were unable to recover it. 
00:29:20.006 [2024-07-24 23:17:37.549756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.006 [2024-07-24 23:17:37.549860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.006 [2024-07-24 23:17:37.549877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.006 [2024-07-24 23:17:37.549885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.006 [2024-07-24 23:17:37.549891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.006 [2024-07-24 23:17:37.549905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.006 qpair failed and we were unable to recover it. 
00:29:20.006 [2024-07-24 23:17:37.559682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.006 [2024-07-24 23:17:37.559762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.006 [2024-07-24 23:17:37.559782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.006 [2024-07-24 23:17:37.559790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.006 [2024-07-24 23:17:37.559795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.006 [2024-07-24 23:17:37.559810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.006 qpair failed and we were unable to recover it. 
00:29:20.006 [2024-07-24 23:17:37.569852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.006 [2024-07-24 23:17:37.569946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.006 [2024-07-24 23:17:37.569962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.006 [2024-07-24 23:17:37.569969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.006 [2024-07-24 23:17:37.569975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.006 [2024-07-24 23:17:37.569990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.006 qpair failed and we were unable to recover it. 
00:29:20.006 [2024-07-24 23:17:37.579835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.006 [2024-07-24 23:17:37.579911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.006 [2024-07-24 23:17:37.579927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.006 [2024-07-24 23:17:37.579934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.006 [2024-07-24 23:17:37.579940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.006 [2024-07-24 23:17:37.579954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.006 qpair failed and we were unable to recover it. 
00:29:20.006 [2024-07-24 23:17:37.589950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.007 [2024-07-24 23:17:37.590025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.007 [2024-07-24 23:17:37.590041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.007 [2024-07-24 23:17:37.590048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.007 [2024-07-24 23:17:37.590054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.007 [2024-07-24 23:17:37.590069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.007 qpair failed and we were unable to recover it. 
00:29:20.007 [2024-07-24 23:17:37.599901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.007 [2024-07-24 23:17:37.599984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.007 [2024-07-24 23:17:37.600001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.007 [2024-07-24 23:17:37.600008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.007 [2024-07-24 23:17:37.600021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.007 [2024-07-24 23:17:37.600035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.007 qpair failed and we were unable to recover it. 
00:29:20.007 [2024-07-24 23:17:37.609961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.007 [2024-07-24 23:17:37.610068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.007 [2024-07-24 23:17:37.610085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.007 [2024-07-24 23:17:37.610091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.007 [2024-07-24 23:17:37.610097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.007 [2024-07-24 23:17:37.610112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.007 qpair failed and we were unable to recover it. 
00:29:20.007 [2024-07-24 23:17:37.619937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.007 [2024-07-24 23:17:37.620008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.007 [2024-07-24 23:17:37.620024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.007 [2024-07-24 23:17:37.620031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.007 [2024-07-24 23:17:37.620037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.007 [2024-07-24 23:17:37.620052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.007 qpair failed and we were unable to recover it. 
00:29:20.007 [2024-07-24 23:17:37.629963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.007 [2024-07-24 23:17:37.630034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.007 [2024-07-24 23:17:37.630051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.007 [2024-07-24 23:17:37.630057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.007 [2024-07-24 23:17:37.630064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.007 [2024-07-24 23:17:37.630078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.007 qpair failed and we were unable to recover it.
00:29:20.007 [2024-07-24 23:17:37.640005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.007 [2024-07-24 23:17:37.640077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.007 [2024-07-24 23:17:37.640093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.007 [2024-07-24 23:17:37.640100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.007 [2024-07-24 23:17:37.640106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.007 [2024-07-24 23:17:37.640120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.007 qpair failed and we were unable to recover it.
00:29:20.007 [2024-07-24 23:17:37.649886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.007 [2024-07-24 23:17:37.649964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.007 [2024-07-24 23:17:37.649980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.007 [2024-07-24 23:17:37.649987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.007 [2024-07-24 23:17:37.649993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.007 [2024-07-24 23:17:37.650007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.007 qpair failed and we were unable to recover it.
00:29:20.007 [2024-07-24 23:17:37.660051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.007 [2024-07-24 23:17:37.660121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.007 [2024-07-24 23:17:37.660137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.007 [2024-07-24 23:17:37.660145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.007 [2024-07-24 23:17:37.660151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.007 [2024-07-24 23:17:37.660165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.007 qpair failed and we were unable to recover it.
00:29:20.007 [2024-07-24 23:17:37.670110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.007 [2024-07-24 23:17:37.670177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.007 [2024-07-24 23:17:37.670194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.007 [2024-07-24 23:17:37.670200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.007 [2024-07-24 23:17:37.670206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.007 [2024-07-24 23:17:37.670221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.007 qpair failed and we were unable to recover it.
00:29:20.007 [2024-07-24 23:17:37.679998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.007 [2024-07-24 23:17:37.680076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.007 [2024-07-24 23:17:37.680093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.007 [2024-07-24 23:17:37.680100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.007 [2024-07-24 23:17:37.680105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.007 [2024-07-24 23:17:37.680120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.007 qpair failed and we were unable to recover it.
00:29:20.007 [2024-07-24 23:17:37.690087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.007 [2024-07-24 23:17:37.690159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.007 [2024-07-24 23:17:37.690176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.007 [2024-07-24 23:17:37.690182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.007 [2024-07-24 23:17:37.690192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.007 [2024-07-24 23:17:37.690206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.007 qpair failed and we were unable to recover it.
00:29:20.007 [2024-07-24 23:17:37.700165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.007 [2024-07-24 23:17:37.700233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.007 [2024-07-24 23:17:37.700249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.007 [2024-07-24 23:17:37.700256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.007 [2024-07-24 23:17:37.700262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.007 [2024-07-24 23:17:37.700277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.007 qpair failed and we were unable to recover it.
00:29:20.007 [2024-07-24 23:17:37.710145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.007 [2024-07-24 23:17:37.710219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.008 [2024-07-24 23:17:37.710235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.008 [2024-07-24 23:17:37.710242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.008 [2024-07-24 23:17:37.710248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.008 [2024-07-24 23:17:37.710262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.008 qpair failed and we were unable to recover it.
00:29:20.008 [2024-07-24 23:17:37.720210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.008 [2024-07-24 23:17:37.720283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.008 [2024-07-24 23:17:37.720300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.008 [2024-07-24 23:17:37.720307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.008 [2024-07-24 23:17:37.720313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.008 [2024-07-24 23:17:37.720327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.008 qpair failed and we were unable to recover it.
00:29:20.008 [2024-07-24 23:17:37.730303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.008 [2024-07-24 23:17:37.730379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.008 [2024-07-24 23:17:37.730395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.008 [2024-07-24 23:17:37.730402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.008 [2024-07-24 23:17:37.730408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.008 [2024-07-24 23:17:37.730422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.008 qpair failed and we were unable to recover it.
00:29:20.008 [2024-07-24 23:17:37.740270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.008 [2024-07-24 23:17:37.740344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.008 [2024-07-24 23:17:37.740361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.008 [2024-07-24 23:17:37.740368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.008 [2024-07-24 23:17:37.740373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.008 [2024-07-24 23:17:37.740387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.008 qpair failed and we were unable to recover it.
00:29:20.008 [2024-07-24 23:17:37.750272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.008 [2024-07-24 23:17:37.750340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.008 [2024-07-24 23:17:37.750356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.008 [2024-07-24 23:17:37.750363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.008 [2024-07-24 23:17:37.750369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.008 [2024-07-24 23:17:37.750383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.008 qpair failed and we were unable to recover it.
00:29:20.008 [2024-07-24 23:17:37.760379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.008 [2024-07-24 23:17:37.760452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.008 [2024-07-24 23:17:37.760469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.008 [2024-07-24 23:17:37.760476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.008 [2024-07-24 23:17:37.760481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.008 [2024-07-24 23:17:37.760496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.008 qpair failed and we were unable to recover it.
00:29:20.008 [2024-07-24 23:17:37.770342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.008 [2024-07-24 23:17:37.770418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.008 [2024-07-24 23:17:37.770437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.008 [2024-07-24 23:17:37.770444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.008 [2024-07-24 23:17:37.770450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.008 [2024-07-24 23:17:37.770465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.008 qpair failed and we were unable to recover it.
00:29:20.008 [2024-07-24 23:17:37.780346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.008 [2024-07-24 23:17:37.780417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.008 [2024-07-24 23:17:37.780434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.008 [2024-07-24 23:17:37.780444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.008 [2024-07-24 23:17:37.780451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.008 [2024-07-24 23:17:37.780465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.008 qpair failed and we were unable to recover it.
00:29:20.008 [2024-07-24 23:17:37.790410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.008 [2024-07-24 23:17:37.790503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.008 [2024-07-24 23:17:37.790529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.008 [2024-07-24 23:17:37.790538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.008 [2024-07-24 23:17:37.790544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.008 [2024-07-24 23:17:37.790563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.008 qpair failed and we were unable to recover it.
00:29:20.270 [2024-07-24 23:17:37.800404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.270 [2024-07-24 23:17:37.800482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.270 [2024-07-24 23:17:37.800508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.270 [2024-07-24 23:17:37.800516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.270 [2024-07-24 23:17:37.800523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.270 [2024-07-24 23:17:37.800541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.270 qpair failed and we were unable to recover it.
00:29:20.270 [2024-07-24 23:17:37.810379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.271 [2024-07-24 23:17:37.810458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.271 [2024-07-24 23:17:37.810484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.271 [2024-07-24 23:17:37.810492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.271 [2024-07-24 23:17:37.810498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.271 [2024-07-24 23:17:37.810517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.271 qpair failed and we were unable to recover it.
00:29:20.271 [2024-07-24 23:17:37.820423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.271 [2024-07-24 23:17:37.820493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.271 [2024-07-24 23:17:37.820519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.271 [2024-07-24 23:17:37.820527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.271 [2024-07-24 23:17:37.820534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.271 [2024-07-24 23:17:37.820552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.271 qpair failed and we were unable to recover it.
00:29:20.271 [2024-07-24 23:17:37.830497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.271 [2024-07-24 23:17:37.830571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.271 [2024-07-24 23:17:37.830589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.271 [2024-07-24 23:17:37.830596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.271 [2024-07-24 23:17:37.830603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.271 [2024-07-24 23:17:37.830618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.271 qpair failed and we were unable to recover it.
00:29:20.271 [2024-07-24 23:17:37.840421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.271 [2024-07-24 23:17:37.840498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.271 [2024-07-24 23:17:37.840516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.271 [2024-07-24 23:17:37.840523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.271 [2024-07-24 23:17:37.840529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.271 [2024-07-24 23:17:37.840543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.271 qpair failed and we were unable to recover it.
00:29:20.271 [2024-07-24 23:17:37.850515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.271 [2024-07-24 23:17:37.850592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.271 [2024-07-24 23:17:37.850609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.271 [2024-07-24 23:17:37.850616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.271 [2024-07-24 23:17:37.850622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.271 [2024-07-24 23:17:37.850637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.271 qpair failed and we were unable to recover it.
00:29:20.271 [2024-07-24 23:17:37.860547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.271 [2024-07-24 23:17:37.860615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.271 [2024-07-24 23:17:37.860632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.271 [2024-07-24 23:17:37.860639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.271 [2024-07-24 23:17:37.860645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.271 [2024-07-24 23:17:37.860660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.271 qpair failed and we were unable to recover it.
00:29:20.271 [2024-07-24 23:17:37.870566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.271 [2024-07-24 23:17:37.870640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.271 [2024-07-24 23:17:37.870657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.271 [2024-07-24 23:17:37.870668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.271 [2024-07-24 23:17:37.870675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.271 [2024-07-24 23:17:37.870689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.271 qpair failed and we were unable to recover it.
00:29:20.271 [2024-07-24 23:17:37.880641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.271 [2024-07-24 23:17:37.880712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.271 [2024-07-24 23:17:37.880729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.271 [2024-07-24 23:17:37.880736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.271 [2024-07-24 23:17:37.880742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.271 [2024-07-24 23:17:37.880761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.271 qpair failed and we were unable to recover it.
00:29:20.271 [2024-07-24 23:17:37.890529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.271 [2024-07-24 23:17:37.890604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.271 [2024-07-24 23:17:37.890620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.271 [2024-07-24 23:17:37.890627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.271 [2024-07-24 23:17:37.890634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.271 [2024-07-24 23:17:37.890649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.271 qpair failed and we were unable to recover it.
00:29:20.271 [2024-07-24 23:17:37.900655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.271 [2024-07-24 23:17:37.900721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.271 [2024-07-24 23:17:37.900738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.271 [2024-07-24 23:17:37.900745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.271 [2024-07-24 23:17:37.900755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.271 [2024-07-24 23:17:37.900770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.271 qpair failed and we were unable to recover it.
00:29:20.271 [2024-07-24 23:17:37.910725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.271 [2024-07-24 23:17:37.910806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.271 [2024-07-24 23:17:37.910823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.271 [2024-07-24 23:17:37.910830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.271 [2024-07-24 23:17:37.910836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.271 [2024-07-24 23:17:37.910852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.271 qpair failed and we were unable to recover it.
00:29:20.271 [2024-07-24 23:17:37.920806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.271 [2024-07-24 23:17:37.920879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.271 [2024-07-24 23:17:37.920895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.271 [2024-07-24 23:17:37.920902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.271 [2024-07-24 23:17:37.920909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.271 [2024-07-24 23:17:37.920923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.271 qpair failed and we were unable to recover it.
00:29:20.271 [2024-07-24 23:17:37.930773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.271 [2024-07-24 23:17:37.930916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.271 [2024-07-24 23:17:37.930932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.271 [2024-07-24 23:17:37.930940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.271 [2024-07-24 23:17:37.930946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.271 [2024-07-24 23:17:37.930961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.271 qpair failed and we were unable to recover it.
00:29:20.271 [2024-07-24 23:17:37.940832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.272 [2024-07-24 23:17:37.940916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.272 [2024-07-24 23:17:37.940933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.272 [2024-07-24 23:17:37.940940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.272 [2024-07-24 23:17:37.940946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.272 [2024-07-24 23:17:37.940961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.272 qpair failed and we were unable to recover it.
00:29:20.272 [2024-07-24 23:17:37.950789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.272 [2024-07-24 23:17:37.950855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.272 [2024-07-24 23:17:37.950872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.272 [2024-07-24 23:17:37.950879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.272 [2024-07-24 23:17:37.950885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.272 [2024-07-24 23:17:37.950900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.272 qpair failed and we were unable to recover it.
00:29:20.272 [2024-07-24 23:17:37.960906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.272 [2024-07-24 23:17:37.961016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.272 [2024-07-24 23:17:37.961033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.272 [2024-07-24 23:17:37.961044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.272 [2024-07-24 23:17:37.961051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.272 [2024-07-24 23:17:37.961065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.272 qpair failed and we were unable to recover it.
00:29:20.272 [2024-07-24 23:17:37.970841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.272 [2024-07-24 23:17:37.970915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.272 [2024-07-24 23:17:37.970932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.272 [2024-07-24 23:17:37.970939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.272 [2024-07-24 23:17:37.970945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.272 [2024-07-24 23:17:37.970960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.272 qpair failed and we were unable to recover it.
00:29:20.272 [2024-07-24 23:17:37.980874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.272 [2024-07-24 23:17:37.980984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.272 [2024-07-24 23:17:37.981000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.272 [2024-07-24 23:17:37.981007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.272 [2024-07-24 23:17:37.981014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.272 [2024-07-24 23:17:37.981028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.272 qpair failed and we were unable to recover it.
00:29:20.272 [2024-07-24 23:17:37.990884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.272 [2024-07-24 23:17:37.990949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.272 [2024-07-24 23:17:37.990965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.272 [2024-07-24 23:17:37.990972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.272 [2024-07-24 23:17:37.990978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.272 [2024-07-24 23:17:37.990993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.272 qpair failed and we were unable to recover it. 
00:29:20.272 [2024-07-24 23:17:38.000964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.272 [2024-07-24 23:17:38.001060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.272 [2024-07-24 23:17:38.001079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.272 [2024-07-24 23:17:38.001086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.272 [2024-07-24 23:17:38.001093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.272 [2024-07-24 23:17:38.001110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.272 qpair failed and we were unable to recover it. 
00:29:20.272 [2024-07-24 23:17:38.010967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.272 [2024-07-24 23:17:38.011044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.272 [2024-07-24 23:17:38.011062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.272 [2024-07-24 23:17:38.011069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.272 [2024-07-24 23:17:38.011076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.272 [2024-07-24 23:17:38.011090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.272 qpair failed and we were unable to recover it. 
00:29:20.272 [2024-07-24 23:17:38.020908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.272 [2024-07-24 23:17:38.020972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.272 [2024-07-24 23:17:38.020989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.272 [2024-07-24 23:17:38.020996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.272 [2024-07-24 23:17:38.021002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.272 [2024-07-24 23:17:38.021017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.272 qpair failed and we were unable to recover it. 
00:29:20.272 [2024-07-24 23:17:38.030970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.272 [2024-07-24 23:17:38.031037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.272 [2024-07-24 23:17:38.031053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.272 [2024-07-24 23:17:38.031060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.272 [2024-07-24 23:17:38.031066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.272 [2024-07-24 23:17:38.031081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.272 qpair failed and we were unable to recover it. 
00:29:20.272 [2024-07-24 23:17:38.041048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.272 [2024-07-24 23:17:38.041167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.272 [2024-07-24 23:17:38.041184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.272 [2024-07-24 23:17:38.041191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.272 [2024-07-24 23:17:38.041198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.272 [2024-07-24 23:17:38.041213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.272 qpair failed and we were unable to recover it. 
00:29:20.272 [2024-07-24 23:17:38.051083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.272 [2024-07-24 23:17:38.051164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.272 [2024-07-24 23:17:38.051185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.272 [2024-07-24 23:17:38.051192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.272 [2024-07-24 23:17:38.051198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.272 [2024-07-24 23:17:38.051213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.272 qpair failed and we were unable to recover it. 
00:29:20.534 [2024-07-24 23:17:38.060966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.534 [2024-07-24 23:17:38.061037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.534 [2024-07-24 23:17:38.061055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.534 [2024-07-24 23:17:38.061061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.534 [2024-07-24 23:17:38.061068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.534 [2024-07-24 23:17:38.061083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.534 qpair failed and we were unable to recover it. 
00:29:20.534 [2024-07-24 23:17:38.071124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.534 [2024-07-24 23:17:38.071189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.534 [2024-07-24 23:17:38.071206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.534 [2024-07-24 23:17:38.071213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.534 [2024-07-24 23:17:38.071219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.534 [2024-07-24 23:17:38.071234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.534 qpair failed and we were unable to recover it. 
00:29:20.534 [2024-07-24 23:17:38.081117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.534 [2024-07-24 23:17:38.081186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.534 [2024-07-24 23:17:38.081202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.534 [2024-07-24 23:17:38.081210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.534 [2024-07-24 23:17:38.081216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.534 [2024-07-24 23:17:38.081230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.534 qpair failed and we were unable to recover it. 
00:29:20.535 [2024-07-24 23:17:38.091171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.535 [2024-07-24 23:17:38.091240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.535 [2024-07-24 23:17:38.091256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.535 [2024-07-24 23:17:38.091263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.535 [2024-07-24 23:17:38.091270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.535 [2024-07-24 23:17:38.091285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.535 qpair failed and we were unable to recover it. 
00:29:20.535 [2024-07-24 23:17:38.101202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.535 [2024-07-24 23:17:38.101264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.535 [2024-07-24 23:17:38.101281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.535 [2024-07-24 23:17:38.101288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.535 [2024-07-24 23:17:38.101294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.535 [2024-07-24 23:17:38.101309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.535 qpair failed and we were unable to recover it. 
00:29:20.535 [2024-07-24 23:17:38.111325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.535 [2024-07-24 23:17:38.111394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.535 [2024-07-24 23:17:38.111410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.535 [2024-07-24 23:17:38.111417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.535 [2024-07-24 23:17:38.111423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.535 [2024-07-24 23:17:38.111438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.535 qpair failed and we were unable to recover it. 
00:29:20.535 [2024-07-24 23:17:38.121232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.535 [2024-07-24 23:17:38.121299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.535 [2024-07-24 23:17:38.121316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.535 [2024-07-24 23:17:38.121323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.535 [2024-07-24 23:17:38.121329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.535 [2024-07-24 23:17:38.121344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.535 qpair failed and we were unable to recover it. 
00:29:20.535 [2024-07-24 23:17:38.131266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.535 [2024-07-24 23:17:38.131332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.535 [2024-07-24 23:17:38.131349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.535 [2024-07-24 23:17:38.131356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.535 [2024-07-24 23:17:38.131362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.535 [2024-07-24 23:17:38.131376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.535 qpair failed and we were unable to recover it. 
00:29:20.535 [2024-07-24 23:17:38.141323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.535 [2024-07-24 23:17:38.141398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.535 [2024-07-24 23:17:38.141428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.535 [2024-07-24 23:17:38.141437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.535 [2024-07-24 23:17:38.141444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.535 [2024-07-24 23:17:38.141463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.535 qpair failed and we were unable to recover it. 
00:29:20.535 [2024-07-24 23:17:38.151312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.535 [2024-07-24 23:17:38.151383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.535 [2024-07-24 23:17:38.151409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.535 [2024-07-24 23:17:38.151417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.535 [2024-07-24 23:17:38.151424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.535 [2024-07-24 23:17:38.151443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.535 qpair failed and we were unable to recover it. 
00:29:20.535 [2024-07-24 23:17:38.161244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.535 [2024-07-24 23:17:38.161314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.535 [2024-07-24 23:17:38.161333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.535 [2024-07-24 23:17:38.161340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.535 [2024-07-24 23:17:38.161346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.535 [2024-07-24 23:17:38.161363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.535 qpair failed and we were unable to recover it. 
00:29:20.535 [2024-07-24 23:17:38.171390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.535 [2024-07-24 23:17:38.171457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.535 [2024-07-24 23:17:38.171474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.535 [2024-07-24 23:17:38.171481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.535 [2024-07-24 23:17:38.171486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.535 [2024-07-24 23:17:38.171501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.535 qpair failed and we were unable to recover it. 
00:29:20.535 [2024-07-24 23:17:38.181297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.535 [2024-07-24 23:17:38.181377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.535 [2024-07-24 23:17:38.181394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.535 [2024-07-24 23:17:38.181401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.535 [2024-07-24 23:17:38.181408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.535 [2024-07-24 23:17:38.181427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.535 qpair failed and we were unable to recover it. 
00:29:20.535 [2024-07-24 23:17:38.191421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.535 [2024-07-24 23:17:38.191490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.535 [2024-07-24 23:17:38.191515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.535 [2024-07-24 23:17:38.191523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.535 [2024-07-24 23:17:38.191530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.535 [2024-07-24 23:17:38.191548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.535 qpair failed and we were unable to recover it. 
00:29:20.535 [2024-07-24 23:17:38.201354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.535 [2024-07-24 23:17:38.201421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.535 [2024-07-24 23:17:38.201439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.535 [2024-07-24 23:17:38.201446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.535 [2024-07-24 23:17:38.201452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.535 [2024-07-24 23:17:38.201469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.535 qpair failed and we were unable to recover it. 
00:29:20.535 [2024-07-24 23:17:38.211483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.535 [2024-07-24 23:17:38.211557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.535 [2024-07-24 23:17:38.211574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.535 [2024-07-24 23:17:38.211581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.535 [2024-07-24 23:17:38.211587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.535 [2024-07-24 23:17:38.211601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.535 qpair failed and we were unable to recover it. 
00:29:20.536 [2024-07-24 23:17:38.221505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.536 [2024-07-24 23:17:38.221571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.536 [2024-07-24 23:17:38.221588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.536 [2024-07-24 23:17:38.221595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.536 [2024-07-24 23:17:38.221601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.536 [2024-07-24 23:17:38.221616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.536 qpair failed and we were unable to recover it. 
00:29:20.536 [2024-07-24 23:17:38.231539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.536 [2024-07-24 23:17:38.231605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.536 [2024-07-24 23:17:38.231625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.536 [2024-07-24 23:17:38.231632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.536 [2024-07-24 23:17:38.231638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.536 [2024-07-24 23:17:38.231653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.536 qpair failed and we were unable to recover it. 
00:29:20.536 [2024-07-24 23:17:38.241569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.536 [2024-07-24 23:17:38.241639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.536 [2024-07-24 23:17:38.241656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.536 [2024-07-24 23:17:38.241663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.536 [2024-07-24 23:17:38.241668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.536 [2024-07-24 23:17:38.241683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.536 qpair failed and we were unable to recover it. 
00:29:20.536 [2024-07-24 23:17:38.251618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.536 [2024-07-24 23:17:38.251690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.536 [2024-07-24 23:17:38.251706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.536 [2024-07-24 23:17:38.251713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.536 [2024-07-24 23:17:38.251719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.536 [2024-07-24 23:17:38.251733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.536 qpair failed and we were unable to recover it. 
00:29:20.536 [2024-07-24 23:17:38.261601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.536 [2024-07-24 23:17:38.261671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.536 [2024-07-24 23:17:38.261687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.536 [2024-07-24 23:17:38.261694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.536 [2024-07-24 23:17:38.261700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.536 [2024-07-24 23:17:38.261714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.536 qpair failed and we were unable to recover it. 
00:29:20.536 [2024-07-24 23:17:38.271712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.536 [2024-07-24 23:17:38.271828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.536 [2024-07-24 23:17:38.271846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.536 [2024-07-24 23:17:38.271853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.536 [2024-07-24 23:17:38.271860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.536 [2024-07-24 23:17:38.271879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.536 qpair failed and we were unable to recover it. 
00:29:20.536 [2024-07-24 23:17:38.281661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.536 [2024-07-24 23:17:38.281727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.536 [2024-07-24 23:17:38.281744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.536 [2024-07-24 23:17:38.281754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.536 [2024-07-24 23:17:38.281761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:20.536 [2024-07-24 23:17:38.281775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.536 qpair failed and we were unable to recover it. 
00:29:20.536 [2024-07-24 23:17:38.291578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.536 [2024-07-24 23:17:38.291659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.536 [2024-07-24 23:17:38.291675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.536 [2024-07-24 23:17:38.291682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.536 [2024-07-24 23:17:38.291688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.536 [2024-07-24 23:17:38.291703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.536 qpair failed and we were unable to recover it.
00:29:20.536 [2024-07-24 23:17:38.301713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.536 [2024-07-24 23:17:38.301786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.536 [2024-07-24 23:17:38.301803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.536 [2024-07-24 23:17:38.301810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.536 [2024-07-24 23:17:38.301816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.536 [2024-07-24 23:17:38.301831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.536 qpair failed and we were unable to recover it.
00:29:20.536 [2024-07-24 23:17:38.311768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.536 [2024-07-24 23:17:38.311834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.536 [2024-07-24 23:17:38.311851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.536 [2024-07-24 23:17:38.311858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.536 [2024-07-24 23:17:38.311864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.536 [2024-07-24 23:17:38.311878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.536 qpair failed and we were unable to recover it.
00:29:20.798 [2024-07-24 23:17:38.321778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.798 [2024-07-24 23:17:38.321848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.798 [2024-07-24 23:17:38.321870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.798 [2024-07-24 23:17:38.321877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.798 [2024-07-24 23:17:38.321883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.798 [2024-07-24 23:17:38.321898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.798 qpair failed and we were unable to recover it.
00:29:20.798 [2024-07-24 23:17:38.331801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.798 [2024-07-24 23:17:38.331876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.798 [2024-07-24 23:17:38.331892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.798 [2024-07-24 23:17:38.331899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.798 [2024-07-24 23:17:38.331905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.798 [2024-07-24 23:17:38.331920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.798 qpair failed and we were unable to recover it.
00:29:20.798 [2024-07-24 23:17:38.341807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.798 [2024-07-24 23:17:38.341875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.798 [2024-07-24 23:17:38.341891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.798 [2024-07-24 23:17:38.341898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.798 [2024-07-24 23:17:38.341904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.798 [2024-07-24 23:17:38.341918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.798 qpair failed and we were unable to recover it.
00:29:20.798 [2024-07-24 23:17:38.351745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.798 [2024-07-24 23:17:38.351857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.798 [2024-07-24 23:17:38.351874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.798 [2024-07-24 23:17:38.351880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.798 [2024-07-24 23:17:38.351886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.798 [2024-07-24 23:17:38.351901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.798 qpair failed and we were unable to recover it.
00:29:20.798 [2024-07-24 23:17:38.361848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.798 [2024-07-24 23:17:38.361913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.799 [2024-07-24 23:17:38.361930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.799 [2024-07-24 23:17:38.361937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.799 [2024-07-24 23:17:38.361950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.799 [2024-07-24 23:17:38.361965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.799 qpair failed and we were unable to recover it.
00:29:20.799 [2024-07-24 23:17:38.371962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.799 [2024-07-24 23:17:38.372072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.799 [2024-07-24 23:17:38.372089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.799 [2024-07-24 23:17:38.372096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.799 [2024-07-24 23:17:38.372101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.799 [2024-07-24 23:17:38.372116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.799 qpair failed and we were unable to recover it.
00:29:20.799 [2024-07-24 23:17:38.381837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.799 [2024-07-24 23:17:38.381903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.799 [2024-07-24 23:17:38.381920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.799 [2024-07-24 23:17:38.381926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.799 [2024-07-24 23:17:38.381932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.799 [2024-07-24 23:17:38.381946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.799 qpair failed and we were unable to recover it.
00:29:20.799 [2024-07-24 23:17:38.392004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.799 [2024-07-24 23:17:38.392070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.799 [2024-07-24 23:17:38.392086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.799 [2024-07-24 23:17:38.392093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.799 [2024-07-24 23:17:38.392100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.799 [2024-07-24 23:17:38.392114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.799 qpair failed and we were unable to recover it.
00:29:20.799 [2024-07-24 23:17:38.401984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.799 [2024-07-24 23:17:38.402053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.799 [2024-07-24 23:17:38.402069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.799 [2024-07-24 23:17:38.402076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.799 [2024-07-24 23:17:38.402082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.799 [2024-07-24 23:17:38.402096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.799 qpair failed and we were unable to recover it.
00:29:20.799 [2024-07-24 23:17:38.412034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.799 [2024-07-24 23:17:38.412105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.799 [2024-07-24 23:17:38.412121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.799 [2024-07-24 23:17:38.412128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.799 [2024-07-24 23:17:38.412134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.799 [2024-07-24 23:17:38.412148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.799 qpair failed and we were unable to recover it.
00:29:20.799 [2024-07-24 23:17:38.422081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.799 [2024-07-24 23:17:38.422154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.799 [2024-07-24 23:17:38.422170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.799 [2024-07-24 23:17:38.422177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.799 [2024-07-24 23:17:38.422183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.799 [2024-07-24 23:17:38.422198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.799 qpair failed and we were unable to recover it.
00:29:20.799 [2024-07-24 23:17:38.432052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.799 [2024-07-24 23:17:38.432117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.799 [2024-07-24 23:17:38.432133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.799 [2024-07-24 23:17:38.432140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.799 [2024-07-24 23:17:38.432146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.799 [2024-07-24 23:17:38.432161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.799 qpair failed and we were unable to recover it.
00:29:20.799 [2024-07-24 23:17:38.442095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.799 [2024-07-24 23:17:38.442162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.799 [2024-07-24 23:17:38.442178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.799 [2024-07-24 23:17:38.442185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.799 [2024-07-24 23:17:38.442191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.799 [2024-07-24 23:17:38.442205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.799 qpair failed and we were unable to recover it.
00:29:20.799 [2024-07-24 23:17:38.451999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.799 [2024-07-24 23:17:38.452080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.799 [2024-07-24 23:17:38.452096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.799 [2024-07-24 23:17:38.452103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.799 [2024-07-24 23:17:38.452112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.799 [2024-07-24 23:17:38.452127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.799 qpair failed and we were unable to recover it.
00:29:20.799 [2024-07-24 23:17:38.462133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.799 [2024-07-24 23:17:38.462202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.799 [2024-07-24 23:17:38.462219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.799 [2024-07-24 23:17:38.462225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.799 [2024-07-24 23:17:38.462231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.799 [2024-07-24 23:17:38.462246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.799 qpair failed and we were unable to recover it.
00:29:20.799 [2024-07-24 23:17:38.472183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.799 [2024-07-24 23:17:38.472247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.799 [2024-07-24 23:17:38.472263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.799 [2024-07-24 23:17:38.472270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.799 [2024-07-24 23:17:38.472276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.799 [2024-07-24 23:17:38.472291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.799 qpair failed and we were unable to recover it.
00:29:20.799 [2024-07-24 23:17:38.482185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.799 [2024-07-24 23:17:38.482253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.799 [2024-07-24 23:17:38.482269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.799 [2024-07-24 23:17:38.482276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.799 [2024-07-24 23:17:38.482282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.799 [2024-07-24 23:17:38.482296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.799 qpair failed and we were unable to recover it.
00:29:20.799 [2024-07-24 23:17:38.492220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.799 [2024-07-24 23:17:38.492290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.799 [2024-07-24 23:17:38.492306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.799 [2024-07-24 23:17:38.492313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.799 [2024-07-24 23:17:38.492319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.800 [2024-07-24 23:17:38.492333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.800 qpair failed and we were unable to recover it.
00:29:20.800 [2024-07-24 23:17:38.502227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.800 [2024-07-24 23:17:38.502291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.800 [2024-07-24 23:17:38.502308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.800 [2024-07-24 23:17:38.502315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.800 [2024-07-24 23:17:38.502321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.800 [2024-07-24 23:17:38.502335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.800 qpair failed and we were unable to recover it.
00:29:20.800 [2024-07-24 23:17:38.512169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.800 [2024-07-24 23:17:38.512249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.800 [2024-07-24 23:17:38.512268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.800 [2024-07-24 23:17:38.512275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.800 [2024-07-24 23:17:38.512281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.800 [2024-07-24 23:17:38.512297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.800 qpair failed and we were unable to recover it.
00:29:20.800 [2024-07-24 23:17:38.522300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.800 [2024-07-24 23:17:38.522365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.800 [2024-07-24 23:17:38.522382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.800 [2024-07-24 23:17:38.522389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.800 [2024-07-24 23:17:38.522395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.800 [2024-07-24 23:17:38.522410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.800 qpair failed and we were unable to recover it.
00:29:20.800 [2024-07-24 23:17:38.532317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.800 [2024-07-24 23:17:38.532393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.800 [2024-07-24 23:17:38.532410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.800 [2024-07-24 23:17:38.532416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.800 [2024-07-24 23:17:38.532422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.800 [2024-07-24 23:17:38.532437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.800 qpair failed and we were unable to recover it.
00:29:20.800 [2024-07-24 23:17:38.542327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.800 [2024-07-24 23:17:38.542401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.800 [2024-07-24 23:17:38.542427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.800 [2024-07-24 23:17:38.542435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.800 [2024-07-24 23:17:38.542446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.800 [2024-07-24 23:17:38.542465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.800 qpair failed and we were unable to recover it.
00:29:20.800 [2024-07-24 23:17:38.552383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.800 [2024-07-24 23:17:38.552456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.800 [2024-07-24 23:17:38.552482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.800 [2024-07-24 23:17:38.552490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.800 [2024-07-24 23:17:38.552497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.800 [2024-07-24 23:17:38.552515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.800 qpair failed and we were unable to recover it.
00:29:20.800 [2024-07-24 23:17:38.562437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.800 [2024-07-24 23:17:38.562506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.800 [2024-07-24 23:17:38.562524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.800 [2024-07-24 23:17:38.562531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.800 [2024-07-24 23:17:38.562537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.800 [2024-07-24 23:17:38.562553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.800 qpair failed and we were unable to recover it.
00:29:20.800 [2024-07-24 23:17:38.572492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.800 [2024-07-24 23:17:38.572609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.800 [2024-07-24 23:17:38.572626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.800 [2024-07-24 23:17:38.572633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.800 [2024-07-24 23:17:38.572639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.800 [2024-07-24 23:17:38.572654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.800 qpair failed and we were unable to recover it.
00:29:20.800 [2024-07-24 23:17:38.582472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.800 [2024-07-24 23:17:38.582539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.800 [2024-07-24 23:17:38.582556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.800 [2024-07-24 23:17:38.582563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.800 [2024-07-24 23:17:38.582569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:20.800 [2024-07-24 23:17:38.582583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.800 qpair failed and we were unable to recover it.
00:29:21.063 [2024-07-24 23:17:38.592482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.063 [2024-07-24 23:17:38.592549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.063 [2024-07-24 23:17:38.592566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.063 [2024-07-24 23:17:38.592573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.063 [2024-07-24 23:17:38.592579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.063 [2024-07-24 23:17:38.592593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.064 qpair failed and we were unable to recover it.
00:29:21.064 [2024-07-24 23:17:38.602535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.064 [2024-07-24 23:17:38.602600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.064 [2024-07-24 23:17:38.602617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.064 [2024-07-24 23:17:38.602624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.064 [2024-07-24 23:17:38.602630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.064 [2024-07-24 23:17:38.602644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.064 qpair failed and we were unable to recover it.
00:29:21.064 [2024-07-24 23:17:38.612561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.064 [2024-07-24 23:17:38.612631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.064 [2024-07-24 23:17:38.612647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.064 [2024-07-24 23:17:38.612654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.064 [2024-07-24 23:17:38.612660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.064 [2024-07-24 23:17:38.612674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.064 qpair failed and we were unable to recover it. 
00:29:21.064 [2024-07-24 23:17:38.622570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.064 [2024-07-24 23:17:38.622630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.064 [2024-07-24 23:17:38.622646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.064 [2024-07-24 23:17:38.622653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.064 [2024-07-24 23:17:38.622659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.064 [2024-07-24 23:17:38.622673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.064 qpair failed and we were unable to recover it. 
00:29:21.064 [2024-07-24 23:17:38.632605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.064 [2024-07-24 23:17:38.632696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.064 [2024-07-24 23:17:38.632712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.064 [2024-07-24 23:17:38.632723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.064 [2024-07-24 23:17:38.632729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.064 [2024-07-24 23:17:38.632744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.064 qpair failed and we were unable to recover it. 
00:29:21.064 [2024-07-24 23:17:38.642618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.064 [2024-07-24 23:17:38.642682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.064 [2024-07-24 23:17:38.642699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.064 [2024-07-24 23:17:38.642706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.064 [2024-07-24 23:17:38.642712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.064 [2024-07-24 23:17:38.642726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.064 qpair failed and we were unable to recover it. 
00:29:21.064 [2024-07-24 23:17:38.652646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.064 [2024-07-24 23:17:38.652716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.064 [2024-07-24 23:17:38.652733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.064 [2024-07-24 23:17:38.652740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.064 [2024-07-24 23:17:38.652745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.064 [2024-07-24 23:17:38.652764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.064 qpair failed and we were unable to recover it. 
00:29:21.064 [2024-07-24 23:17:38.662696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.064 [2024-07-24 23:17:38.662763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.064 [2024-07-24 23:17:38.662780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.064 [2024-07-24 23:17:38.662787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.064 [2024-07-24 23:17:38.662793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.064 [2024-07-24 23:17:38.662807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.064 qpair failed and we were unable to recover it. 
00:29:21.064 [2024-07-24 23:17:38.672598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.064 [2024-07-24 23:17:38.672667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.064 [2024-07-24 23:17:38.672683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.064 [2024-07-24 23:17:38.672690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.064 [2024-07-24 23:17:38.672696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.064 [2024-07-24 23:17:38.672710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.064 qpair failed and we were unable to recover it. 
00:29:21.064 [2024-07-24 23:17:38.682687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.064 [2024-07-24 23:17:38.682756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.064 [2024-07-24 23:17:38.682773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.064 [2024-07-24 23:17:38.682780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.064 [2024-07-24 23:17:38.682786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.064 [2024-07-24 23:17:38.682800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.064 qpair failed and we were unable to recover it. 
00:29:21.064 [2024-07-24 23:17:38.692756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.064 [2024-07-24 23:17:38.692829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.064 [2024-07-24 23:17:38.692845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.064 [2024-07-24 23:17:38.692852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.064 [2024-07-24 23:17:38.692858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.064 [2024-07-24 23:17:38.692873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.064 qpair failed and we were unable to recover it. 
00:29:21.064 [2024-07-24 23:17:38.702776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.064 [2024-07-24 23:17:38.702840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.064 [2024-07-24 23:17:38.702857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.064 [2024-07-24 23:17:38.702863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.064 [2024-07-24 23:17:38.702869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.064 [2024-07-24 23:17:38.702884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.064 qpair failed and we were unable to recover it. 
00:29:21.064 [2024-07-24 23:17:38.712801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.064 [2024-07-24 23:17:38.712872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.064 [2024-07-24 23:17:38.712889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.064 [2024-07-24 23:17:38.712896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.064 [2024-07-24 23:17:38.712901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.064 [2024-07-24 23:17:38.712916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.064 qpair failed and we were unable to recover it. 
00:29:21.064 [2024-07-24 23:17:38.722851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.064 [2024-07-24 23:17:38.722951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.064 [2024-07-24 23:17:38.722967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.064 [2024-07-24 23:17:38.722977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.064 [2024-07-24 23:17:38.722983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.064 [2024-07-24 23:17:38.722998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.064 qpair failed and we were unable to recover it. 
00:29:21.065 [2024-07-24 23:17:38.732886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.065 [2024-07-24 23:17:38.732957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.065 [2024-07-24 23:17:38.732974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.065 [2024-07-24 23:17:38.732980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.065 [2024-07-24 23:17:38.732986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.065 [2024-07-24 23:17:38.733001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.065 qpair failed and we were unable to recover it. 
00:29:21.065 [2024-07-24 23:17:38.742886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.065 [2024-07-24 23:17:38.742957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.065 [2024-07-24 23:17:38.742974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.065 [2024-07-24 23:17:38.742981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.065 [2024-07-24 23:17:38.742987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.065 [2024-07-24 23:17:38.743001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.065 qpair failed and we were unable to recover it. 
00:29:21.065 [2024-07-24 23:17:38.752952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.065 [2024-07-24 23:17:38.753021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.065 [2024-07-24 23:17:38.753037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.065 [2024-07-24 23:17:38.753044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.065 [2024-07-24 23:17:38.753050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.065 [2024-07-24 23:17:38.753064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.065 qpair failed and we were unable to recover it. 
00:29:21.065 [2024-07-24 23:17:38.762940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.065 [2024-07-24 23:17:38.763009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.065 [2024-07-24 23:17:38.763026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.065 [2024-07-24 23:17:38.763033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.065 [2024-07-24 23:17:38.763039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.065 [2024-07-24 23:17:38.763053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.065 qpair failed and we were unable to recover it. 
00:29:21.065 [2024-07-24 23:17:38.772997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.065 [2024-07-24 23:17:38.773067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.065 [2024-07-24 23:17:38.773083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.065 [2024-07-24 23:17:38.773090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.065 [2024-07-24 23:17:38.773096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.065 [2024-07-24 23:17:38.773111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.065 qpair failed and we were unable to recover it. 
00:29:21.065 [2024-07-24 23:17:38.782999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.065 [2024-07-24 23:17:38.783065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.065 [2024-07-24 23:17:38.783081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.065 [2024-07-24 23:17:38.783088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.065 [2024-07-24 23:17:38.783094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.065 [2024-07-24 23:17:38.783108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.065 qpair failed and we were unable to recover it. 
00:29:21.065 [2024-07-24 23:17:38.793039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.065 [2024-07-24 23:17:38.793106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.065 [2024-07-24 23:17:38.793122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.065 [2024-07-24 23:17:38.793129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.065 [2024-07-24 23:17:38.793134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.065 [2024-07-24 23:17:38.793149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.065 qpair failed and we were unable to recover it. 
00:29:21.065 [2024-07-24 23:17:38.803081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.065 [2024-07-24 23:17:38.803152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.065 [2024-07-24 23:17:38.803168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.065 [2024-07-24 23:17:38.803175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.065 [2024-07-24 23:17:38.803181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.065 [2024-07-24 23:17:38.803195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.065 qpair failed and we were unable to recover it. 
00:29:21.065 [2024-07-24 23:17:38.813126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.065 [2024-07-24 23:17:38.813198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.065 [2024-07-24 23:17:38.813218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.065 [2024-07-24 23:17:38.813225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.065 [2024-07-24 23:17:38.813231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.065 [2024-07-24 23:17:38.813245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.065 qpair failed and we were unable to recover it. 
00:29:21.065 [2024-07-24 23:17:38.823000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.065 [2024-07-24 23:17:38.823076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.065 [2024-07-24 23:17:38.823093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.065 [2024-07-24 23:17:38.823100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.065 [2024-07-24 23:17:38.823106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.065 [2024-07-24 23:17:38.823120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.065 qpair failed and we were unable to recover it. 
00:29:21.065 [2024-07-24 23:17:38.833140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.065 [2024-07-24 23:17:38.833206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.065 [2024-07-24 23:17:38.833222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.065 [2024-07-24 23:17:38.833229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.065 [2024-07-24 23:17:38.833235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.065 [2024-07-24 23:17:38.833249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.065 qpair failed and we were unable to recover it. 
00:29:21.065 [2024-07-24 23:17:38.843199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.065 [2024-07-24 23:17:38.843267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.065 [2024-07-24 23:17:38.843283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.065 [2024-07-24 23:17:38.843290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.065 [2024-07-24 23:17:38.843296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.065 [2024-07-24 23:17:38.843310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.065 qpair failed and we were unable to recover it. 
00:29:21.328 [2024-07-24 23:17:38.853217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.328 [2024-07-24 23:17:38.853293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.328 [2024-07-24 23:17:38.853309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.328 [2024-07-24 23:17:38.853316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.328 [2024-07-24 23:17:38.853322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.328 [2024-07-24 23:17:38.853337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.328 qpair failed and we were unable to recover it. 
00:29:21.328 [2024-07-24 23:17:38.863217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.328 [2024-07-24 23:17:38.863287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.328 [2024-07-24 23:17:38.863304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.328 [2024-07-24 23:17:38.863311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.328 [2024-07-24 23:17:38.863317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.328 [2024-07-24 23:17:38.863331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.328 qpair failed and we were unable to recover it. 
00:29:21.328 [2024-07-24 23:17:38.873261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.328 [2024-07-24 23:17:38.873326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.328 [2024-07-24 23:17:38.873343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.328 [2024-07-24 23:17:38.873349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.328 [2024-07-24 23:17:38.873355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.328 [2024-07-24 23:17:38.873370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.328 qpair failed and we were unable to recover it. 
00:29:21.328 [2024-07-24 23:17:38.883291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.328 [2024-07-24 23:17:38.883355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.328 [2024-07-24 23:17:38.883372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.328 [2024-07-24 23:17:38.883378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.328 [2024-07-24 23:17:38.883384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.328 [2024-07-24 23:17:38.883399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.328 qpair failed and we were unable to recover it. 
00:29:21.328 [2024-07-24 23:17:38.893326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.328 [2024-07-24 23:17:38.893404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.328 [2024-07-24 23:17:38.893429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.328 [2024-07-24 23:17:38.893437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.328 [2024-07-24 23:17:38.893444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.328 [2024-07-24 23:17:38.893464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.328 qpair failed and we were unable to recover it. 
00:29:21.328 [2024-07-24 23:17:38.903312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.329 [2024-07-24 23:17:38.903381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.329 [2024-07-24 23:17:38.903411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.329 [2024-07-24 23:17:38.903420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.329 [2024-07-24 23:17:38.903427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.329 [2024-07-24 23:17:38.903447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.329 qpair failed and we were unable to recover it. 
00:29:21.329 [2024-07-24 23:17:38.913372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.329 [2024-07-24 23:17:38.913444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.329 [2024-07-24 23:17:38.913470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.329 [2024-07-24 23:17:38.913478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.329 [2024-07-24 23:17:38.913485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.329 [2024-07-24 23:17:38.913504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.329 qpair failed and we were unable to recover it.
00:29:21.329 [2024-07-24 23:17:38.923426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.329 [2024-07-24 23:17:38.923506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.329 [2024-07-24 23:17:38.923532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.329 [2024-07-24 23:17:38.923540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.329 [2024-07-24 23:17:38.923547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.329 [2024-07-24 23:17:38.923565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.329 qpair failed and we were unable to recover it.
00:29:21.329 [2024-07-24 23:17:38.933386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.329 [2024-07-24 23:17:38.933461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.329 [2024-07-24 23:17:38.933479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.329 [2024-07-24 23:17:38.933486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.329 [2024-07-24 23:17:38.933492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.329 [2024-07-24 23:17:38.933507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.329 qpair failed and we were unable to recover it.
00:29:21.329 [2024-07-24 23:17:38.943436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.329 [2024-07-24 23:17:38.943508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.329 [2024-07-24 23:17:38.943525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.329 [2024-07-24 23:17:38.943532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.329 [2024-07-24 23:17:38.943538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.329 [2024-07-24 23:17:38.943557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.329 qpair failed and we were unable to recover it.
00:29:21.329 [2024-07-24 23:17:38.953453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.329 [2024-07-24 23:17:38.953519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.329 [2024-07-24 23:17:38.953536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.329 [2024-07-24 23:17:38.953543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.329 [2024-07-24 23:17:38.953549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.329 [2024-07-24 23:17:38.953564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.329 qpair failed and we were unable to recover it.
00:29:21.329 [2024-07-24 23:17:38.963481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.329 [2024-07-24 23:17:38.963551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.329 [2024-07-24 23:17:38.963568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.329 [2024-07-24 23:17:38.963575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.329 [2024-07-24 23:17:38.963581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.329 [2024-07-24 23:17:38.963595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.329 qpair failed and we were unable to recover it.
00:29:21.329 [2024-07-24 23:17:38.973507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.329 [2024-07-24 23:17:38.973577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.329 [2024-07-24 23:17:38.973594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.329 [2024-07-24 23:17:38.973601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.329 [2024-07-24 23:17:38.973607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.329 [2024-07-24 23:17:38.973622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.329 qpair failed and we were unable to recover it.
00:29:21.329 [2024-07-24 23:17:38.983547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.329 [2024-07-24 23:17:38.983615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.329 [2024-07-24 23:17:38.983632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.329 [2024-07-24 23:17:38.983639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.329 [2024-07-24 23:17:38.983645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.329 [2024-07-24 23:17:38.983659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.329 qpair failed and we were unable to recover it.
00:29:21.329 [2024-07-24 23:17:38.993589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.329 [2024-07-24 23:17:38.993687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.329 [2024-07-24 23:17:38.993707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.329 [2024-07-24 23:17:38.993714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.329 [2024-07-24 23:17:38.993720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.329 [2024-07-24 23:17:38.993735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.329 qpair failed and we were unable to recover it.
00:29:21.329 [2024-07-24 23:17:39.003570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.329 [2024-07-24 23:17:39.003684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.329 [2024-07-24 23:17:39.003701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.329 [2024-07-24 23:17:39.003708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.329 [2024-07-24 23:17:39.003714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.329 [2024-07-24 23:17:39.003728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.329 qpair failed and we were unable to recover it.
00:29:21.329 [2024-07-24 23:17:39.013604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.329 [2024-07-24 23:17:39.013676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.329 [2024-07-24 23:17:39.013693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.329 [2024-07-24 23:17:39.013700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.329 [2024-07-24 23:17:39.013706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.329 [2024-07-24 23:17:39.013721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.329 qpair failed and we were unable to recover it.
00:29:21.329 [2024-07-24 23:17:39.023635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.329 [2024-07-24 23:17:39.023701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.329 [2024-07-24 23:17:39.023718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.329 [2024-07-24 23:17:39.023725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.329 [2024-07-24 23:17:39.023731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.329 [2024-07-24 23:17:39.023745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.329 qpair failed and we were unable to recover it.
00:29:21.329 [2024-07-24 23:17:39.033681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.329 [2024-07-24 23:17:39.033744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.330 [2024-07-24 23:17:39.033765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.330 [2024-07-24 23:17:39.033772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.330 [2024-07-24 23:17:39.033778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.330 [2024-07-24 23:17:39.033796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.330 qpair failed and we were unable to recover it.
00:29:21.330 [2024-07-24 23:17:39.043582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.330 [2024-07-24 23:17:39.043645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.330 [2024-07-24 23:17:39.043662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.330 [2024-07-24 23:17:39.043669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.330 [2024-07-24 23:17:39.043675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.330 [2024-07-24 23:17:39.043689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.330 qpair failed and we were unable to recover it.
00:29:21.330 [2024-07-24 23:17:39.053709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.330 [2024-07-24 23:17:39.053798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.330 [2024-07-24 23:17:39.053815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.330 [2024-07-24 23:17:39.053823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.330 [2024-07-24 23:17:39.053829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.330 [2024-07-24 23:17:39.053844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.330 qpair failed and we were unable to recover it.
00:29:21.330 [2024-07-24 23:17:39.063624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.330 [2024-07-24 23:17:39.063687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.330 [2024-07-24 23:17:39.063704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.330 [2024-07-24 23:17:39.063711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.330 [2024-07-24 23:17:39.063717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.330 [2024-07-24 23:17:39.063731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.330 qpair failed and we were unable to recover it.
00:29:21.330 [2024-07-24 23:17:39.073657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.330 [2024-07-24 23:17:39.073737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.330 [2024-07-24 23:17:39.073758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.330 [2024-07-24 23:17:39.073766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.330 [2024-07-24 23:17:39.073772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.330 [2024-07-24 23:17:39.073786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.330 qpair failed and we were unable to recover it.
00:29:21.330 [2024-07-24 23:17:39.083809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.330 [2024-07-24 23:17:39.083878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.330 [2024-07-24 23:17:39.083898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.330 [2024-07-24 23:17:39.083905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.330 [2024-07-24 23:17:39.083911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.330 [2024-07-24 23:17:39.083925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.330 qpair failed and we were unable to recover it.
00:29:21.330 [2024-07-24 23:17:39.093833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.330 [2024-07-24 23:17:39.093907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.330 [2024-07-24 23:17:39.093923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.330 [2024-07-24 23:17:39.093930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.330 [2024-07-24 23:17:39.093935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.330 [2024-07-24 23:17:39.093950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.330 qpair failed and we were unable to recover it.
00:29:21.330 [2024-07-24 23:17:39.103841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.330 [2024-07-24 23:17:39.103915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.330 [2024-07-24 23:17:39.103932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.330 [2024-07-24 23:17:39.103939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.330 [2024-07-24 23:17:39.103945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.330 [2024-07-24 23:17:39.103959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.330 qpair failed and we were unable to recover it.
00:29:21.593 [2024-07-24 23:17:39.113776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.593 [2024-07-24 23:17:39.113846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.593 [2024-07-24 23:17:39.113863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.593 [2024-07-24 23:17:39.113869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.593 [2024-07-24 23:17:39.113876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.593 [2024-07-24 23:17:39.113890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.593 qpair failed and we were unable to recover it.
00:29:21.593 [2024-07-24 23:17:39.123900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.593 [2024-07-24 23:17:39.123963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.593 [2024-07-24 23:17:39.123980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.593 [2024-07-24 23:17:39.123987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.593 [2024-07-24 23:17:39.124000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.593 [2024-07-24 23:17:39.124015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.593 qpair failed and we were unable to recover it.
00:29:21.593 [2024-07-24 23:17:39.133936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.593 [2024-07-24 23:17:39.134007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.593 [2024-07-24 23:17:39.134024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.593 [2024-07-24 23:17:39.134031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.593 [2024-07-24 23:17:39.134037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.593 [2024-07-24 23:17:39.134051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.593 qpair failed and we were unable to recover it.
00:29:21.593 [2024-07-24 23:17:39.143984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.593 [2024-07-24 23:17:39.144045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.593 [2024-07-24 23:17:39.144063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.593 [2024-07-24 23:17:39.144069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.593 [2024-07-24 23:17:39.144075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.593 [2024-07-24 23:17:39.144089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.593 qpair failed and we were unable to recover it.
00:29:21.593 [2024-07-24 23:17:39.154006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.593 [2024-07-24 23:17:39.154073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.593 [2024-07-24 23:17:39.154090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.593 [2024-07-24 23:17:39.154097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.593 [2024-07-24 23:17:39.154103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.593 [2024-07-24 23:17:39.154117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.593 qpair failed and we were unable to recover it.
00:29:21.593 [2024-07-24 23:17:39.164041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.593 [2024-07-24 23:17:39.164121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.593 [2024-07-24 23:17:39.164138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.593 [2024-07-24 23:17:39.164145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.593 [2024-07-24 23:17:39.164151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.593 [2024-07-24 23:17:39.164165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.593 qpair failed and we were unable to recover it.
00:29:21.593 [2024-07-24 23:17:39.174034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.593 [2024-07-24 23:17:39.174098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.593 [2024-07-24 23:17:39.174115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.593 [2024-07-24 23:17:39.174122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.593 [2024-07-24 23:17:39.174128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.593 [2024-07-24 23:17:39.174142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.593 qpair failed and we were unable to recover it.
00:29:21.593 [2024-07-24 23:17:39.184055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.593 [2024-07-24 23:17:39.184125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.593 [2024-07-24 23:17:39.184142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.593 [2024-07-24 23:17:39.184149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.593 [2024-07-24 23:17:39.184155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.593 [2024-07-24 23:17:39.184169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.593 qpair failed and we were unable to recover it.
00:29:21.593 [2024-07-24 23:17:39.194101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.593 [2024-07-24 23:17:39.194166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.593 [2024-07-24 23:17:39.194182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.593 [2024-07-24 23:17:39.194189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.593 [2024-07-24 23:17:39.194195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.593 [2024-07-24 23:17:39.194209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.593 qpair failed and we were unable to recover it.
00:29:21.593 [2024-07-24 23:17:39.204110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.593 [2024-07-24 23:17:39.204176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.593 [2024-07-24 23:17:39.204192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.593 [2024-07-24 23:17:39.204199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.593 [2024-07-24 23:17:39.204204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.593 [2024-07-24 23:17:39.204219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.593 qpair failed and we were unable to recover it.
00:29:21.593 [2024-07-24 23:17:39.214150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.593 [2024-07-24 23:17:39.214210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.593 [2024-07-24 23:17:39.214227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.593 [2024-07-24 23:17:39.214233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.593 [2024-07-24 23:17:39.214243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.593 [2024-07-24 23:17:39.214257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.593 qpair failed and we were unable to recover it.
00:29:21.593 [2024-07-24 23:17:39.224168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.593 [2024-07-24 23:17:39.224234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.593 [2024-07-24 23:17:39.224250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.593 [2024-07-24 23:17:39.224257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.594 [2024-07-24 23:17:39.224263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.594 [2024-07-24 23:17:39.224277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.594 qpair failed and we were unable to recover it.
00:29:21.594 [2024-07-24 23:17:39.234213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.594 [2024-07-24 23:17:39.234278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.594 [2024-07-24 23:17:39.234295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.594 [2024-07-24 23:17:39.234301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.594 [2024-07-24 23:17:39.234308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.594 [2024-07-24 23:17:39.234322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.594 qpair failed and we were unable to recover it.
00:29:21.594 [2024-07-24 23:17:39.244276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.594 [2024-07-24 23:17:39.244343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.594 [2024-07-24 23:17:39.244360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.594 [2024-07-24 23:17:39.244366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.594 [2024-07-24 23:17:39.244372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.594 [2024-07-24 23:17:39.244386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.594 qpair failed and we were unable to recover it.
00:29:21.594 [2024-07-24 23:17:39.254171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.594 [2024-07-24 23:17:39.254249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.594 [2024-07-24 23:17:39.254274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.594 [2024-07-24 23:17:39.254283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.594 [2024-07-24 23:17:39.254289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.594 [2024-07-24 23:17:39.254308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.594 qpair failed and we were unable to recover it.
00:29:21.594 [2024-07-24 23:17:39.264357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.594 [2024-07-24 23:17:39.264428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.594 [2024-07-24 23:17:39.264446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.594 [2024-07-24 23:17:39.264453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.594 [2024-07-24 23:17:39.264460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.594 [2024-07-24 23:17:39.264475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.594 qpair failed and we were unable to recover it.
00:29:21.594 [2024-07-24 23:17:39.274216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.594 [2024-07-24 23:17:39.274289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.594 [2024-07-24 23:17:39.274305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.594 [2024-07-24 23:17:39.274312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.594 [2024-07-24 23:17:39.274318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.594 [2024-07-24 23:17:39.274333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.594 qpair failed and we were unable to recover it. 
00:29:21.594 [2024-07-24 23:17:39.284362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.594 [2024-07-24 23:17:39.284428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.594 [2024-07-24 23:17:39.284444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.594 [2024-07-24 23:17:39.284451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.594 [2024-07-24 23:17:39.284457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.594 [2024-07-24 23:17:39.284471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.594 qpair failed and we were unable to recover it. 
00:29:21.594 [2024-07-24 23:17:39.294270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.594 [2024-07-24 23:17:39.294339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.594 [2024-07-24 23:17:39.294355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.594 [2024-07-24 23:17:39.294362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.594 [2024-07-24 23:17:39.294368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.594 [2024-07-24 23:17:39.294383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.594 qpair failed and we were unable to recover it. 
00:29:21.594 [2024-07-24 23:17:39.304467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.594 [2024-07-24 23:17:39.304547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.594 [2024-07-24 23:17:39.304573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.594 [2024-07-24 23:17:39.304582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.594 [2024-07-24 23:17:39.304592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.594 [2024-07-24 23:17:39.304612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.594 qpair failed and we were unable to recover it. 
00:29:21.594 [2024-07-24 23:17:39.314350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.594 [2024-07-24 23:17:39.314424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.594 [2024-07-24 23:17:39.314450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.594 [2024-07-24 23:17:39.314459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.594 [2024-07-24 23:17:39.314465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.594 [2024-07-24 23:17:39.314484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.594 qpair failed and we were unable to recover it. 
00:29:21.594 [2024-07-24 23:17:39.324519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.594 [2024-07-24 23:17:39.324593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.594 [2024-07-24 23:17:39.324619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.594 [2024-07-24 23:17:39.324627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.594 [2024-07-24 23:17:39.324634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.594 [2024-07-24 23:17:39.324653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.594 qpair failed and we were unable to recover it. 
00:29:21.594 [2024-07-24 23:17:39.334505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.594 [2024-07-24 23:17:39.334574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.594 [2024-07-24 23:17:39.334592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.594 [2024-07-24 23:17:39.334599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.594 [2024-07-24 23:17:39.334605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.594 [2024-07-24 23:17:39.334621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.594 qpair failed and we were unable to recover it. 
00:29:21.594 [2024-07-24 23:17:39.344522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.594 [2024-07-24 23:17:39.344597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.594 [2024-07-24 23:17:39.344615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.594 [2024-07-24 23:17:39.344621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.594 [2024-07-24 23:17:39.344627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.594 [2024-07-24 23:17:39.344642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.594 qpair failed and we were unable to recover it. 
00:29:21.594 [2024-07-24 23:17:39.354556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.594 [2024-07-24 23:17:39.354621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.594 [2024-07-24 23:17:39.354638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.594 [2024-07-24 23:17:39.354644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.594 [2024-07-24 23:17:39.354651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.594 [2024-07-24 23:17:39.354665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.594 qpair failed and we were unable to recover it. 
00:29:21.595 [2024-07-24 23:17:39.364582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.595 [2024-07-24 23:17:39.364692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.595 [2024-07-24 23:17:39.364709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.595 [2024-07-24 23:17:39.364715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.595 [2024-07-24 23:17:39.364722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.595 [2024-07-24 23:17:39.364736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.595 qpair failed and we were unable to recover it. 
00:29:21.595 [2024-07-24 23:17:39.374633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.595 [2024-07-24 23:17:39.374714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.595 [2024-07-24 23:17:39.374731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.595 [2024-07-24 23:17:39.374738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.595 [2024-07-24 23:17:39.374744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.595 [2024-07-24 23:17:39.374763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.595 qpair failed and we were unable to recover it. 
00:29:21.857 [2024-07-24 23:17:39.384513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.857 [2024-07-24 23:17:39.384580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.857 [2024-07-24 23:17:39.384597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.857 [2024-07-24 23:17:39.384604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.857 [2024-07-24 23:17:39.384610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.857 [2024-07-24 23:17:39.384625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.857 qpair failed and we were unable to recover it. 
00:29:21.857 [2024-07-24 23:17:39.394677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.857 [2024-07-24 23:17:39.394787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.857 [2024-07-24 23:17:39.394804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.857 [2024-07-24 23:17:39.394814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.857 [2024-07-24 23:17:39.394820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.857 [2024-07-24 23:17:39.394835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.857 qpair failed and we were unable to recover it. 
00:29:21.857 [2024-07-24 23:17:39.404560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.857 [2024-07-24 23:17:39.404626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.857 [2024-07-24 23:17:39.404642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.857 [2024-07-24 23:17:39.404649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.857 [2024-07-24 23:17:39.404655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.857 [2024-07-24 23:17:39.404669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.857 qpair failed and we were unable to recover it. 
00:29:21.857 [2024-07-24 23:17:39.414713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.857 [2024-07-24 23:17:39.414788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.857 [2024-07-24 23:17:39.414804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.857 [2024-07-24 23:17:39.414811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.857 [2024-07-24 23:17:39.414817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.857 [2024-07-24 23:17:39.414831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.857 qpair failed and we were unable to recover it. 
00:29:21.857 [2024-07-24 23:17:39.424739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.857 [2024-07-24 23:17:39.424810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.857 [2024-07-24 23:17:39.424827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.857 [2024-07-24 23:17:39.424834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.857 [2024-07-24 23:17:39.424840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.857 [2024-07-24 23:17:39.424854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.857 qpair failed and we were unable to recover it. 
00:29:21.857 [2024-07-24 23:17:39.434730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.857 [2024-07-24 23:17:39.434878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.857 [2024-07-24 23:17:39.434895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.857 [2024-07-24 23:17:39.434901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.857 [2024-07-24 23:17:39.434907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.857 [2024-07-24 23:17:39.434921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.857 qpair failed and we were unable to recover it. 
00:29:21.857 [2024-07-24 23:17:39.444661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.857 [2024-07-24 23:17:39.444727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.857 [2024-07-24 23:17:39.444743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.857 [2024-07-24 23:17:39.444754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.857 [2024-07-24 23:17:39.444761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.857 [2024-07-24 23:17:39.444775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.857 qpair failed and we were unable to recover it. 
00:29:21.857 [2024-07-24 23:17:39.454817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.857 [2024-07-24 23:17:39.454889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.857 [2024-07-24 23:17:39.454905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.857 [2024-07-24 23:17:39.454912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.857 [2024-07-24 23:17:39.454917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.857 [2024-07-24 23:17:39.454932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.857 qpair failed and we were unable to recover it. 
00:29:21.857 [2024-07-24 23:17:39.464823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.857 [2024-07-24 23:17:39.464890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.857 [2024-07-24 23:17:39.464906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.857 [2024-07-24 23:17:39.464913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.857 [2024-07-24 23:17:39.464919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.857 [2024-07-24 23:17:39.464934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.857 qpair failed and we were unable to recover it. 
00:29:21.857 [2024-07-24 23:17:39.474747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.857 [2024-07-24 23:17:39.474822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.857 [2024-07-24 23:17:39.474838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.857 [2024-07-24 23:17:39.474845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.857 [2024-07-24 23:17:39.474852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.857 [2024-07-24 23:17:39.474866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.857 qpair failed and we were unable to recover it. 
00:29:21.857 [2024-07-24 23:17:39.484898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.857 [2024-07-24 23:17:39.484963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.858 [2024-07-24 23:17:39.484980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.858 [2024-07-24 23:17:39.484990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.858 [2024-07-24 23:17:39.484996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.858 [2024-07-24 23:17:39.485010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.858 qpair failed and we were unable to recover it. 
00:29:21.858 [2024-07-24 23:17:39.494799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.858 [2024-07-24 23:17:39.494871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.858 [2024-07-24 23:17:39.494887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.858 [2024-07-24 23:17:39.494894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.858 [2024-07-24 23:17:39.494900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.858 [2024-07-24 23:17:39.494914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.858 qpair failed and we were unable to recover it. 
00:29:21.858 [2024-07-24 23:17:39.504835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.858 [2024-07-24 23:17:39.504900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.858 [2024-07-24 23:17:39.504917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.858 [2024-07-24 23:17:39.504925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.858 [2024-07-24 23:17:39.504930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.858 [2024-07-24 23:17:39.504945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.858 qpair failed and we were unable to recover it. 
00:29:21.858 [2024-07-24 23:17:39.514947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.858 [2024-07-24 23:17:39.515027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.858 [2024-07-24 23:17:39.515044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.858 [2024-07-24 23:17:39.515050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.858 [2024-07-24 23:17:39.515056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.858 [2024-07-24 23:17:39.515071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.858 qpair failed and we were unable to recover it. 
00:29:21.858 [2024-07-24 23:17:39.524985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.858 [2024-07-24 23:17:39.525050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.858 [2024-07-24 23:17:39.525066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.858 [2024-07-24 23:17:39.525073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.858 [2024-07-24 23:17:39.525079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.858 [2024-07-24 23:17:39.525093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.858 qpair failed and we were unable to recover it. 
00:29:21.858 [2024-07-24 23:17:39.534956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.858 [2024-07-24 23:17:39.535059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.858 [2024-07-24 23:17:39.535075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.858 [2024-07-24 23:17:39.535082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.858 [2024-07-24 23:17:39.535088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.858 [2024-07-24 23:17:39.535102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.858 qpair failed and we were unable to recover it. 
00:29:21.858 [2024-07-24 23:17:39.545034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.858 [2024-07-24 23:17:39.545109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.858 [2024-07-24 23:17:39.545125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.858 [2024-07-24 23:17:39.545132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.858 [2024-07-24 23:17:39.545138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:21.858 [2024-07-24 23:17:39.545153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.858 qpair failed and we were unable to recover it. 
00:29:21.858 [2024-07-24 23:17:39.555084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.858 [2024-07-24 23:17:39.555145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.858 [2024-07-24 23:17:39.555161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.858 [2024-07-24 23:17:39.555167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.858 [2024-07-24 23:17:39.555173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.858 [2024-07-24 23:17:39.555187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.858 qpair failed and we were unable to recover it.
00:29:21.858 [2024-07-24 23:17:39.565128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.858 [2024-07-24 23:17:39.565219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.858 [2024-07-24 23:17:39.565235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.858 [2024-07-24 23:17:39.565242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.858 [2024-07-24 23:17:39.565248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.858 [2024-07-24 23:17:39.565263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.858 qpair failed and we were unable to recover it.
00:29:21.858 [2024-07-24 23:17:39.575034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.858 [2024-07-24 23:17:39.575101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.858 [2024-07-24 23:17:39.575117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.858 [2024-07-24 23:17:39.575127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.858 [2024-07-24 23:17:39.575133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.858 [2024-07-24 23:17:39.575147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.858 qpair failed and we were unable to recover it.
00:29:21.858 [2024-07-24 23:17:39.585144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.858 [2024-07-24 23:17:39.585207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.858 [2024-07-24 23:17:39.585223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.858 [2024-07-24 23:17:39.585230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.858 [2024-07-24 23:17:39.585236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.858 [2024-07-24 23:17:39.585250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.858 qpair failed and we were unable to recover it.
00:29:21.858 [2024-07-24 23:17:39.595179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.858 [2024-07-24 23:17:39.595244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.858 [2024-07-24 23:17:39.595260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.858 [2024-07-24 23:17:39.595267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.858 [2024-07-24 23:17:39.595273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.858 [2024-07-24 23:17:39.595288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.858 qpair failed and we were unable to recover it.
00:29:21.858 [2024-07-24 23:17:39.605085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.858 [2024-07-24 23:17:39.605150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.858 [2024-07-24 23:17:39.605166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.858 [2024-07-24 23:17:39.605173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.858 [2024-07-24 23:17:39.605179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.858 [2024-07-24 23:17:39.605193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.858 qpair failed and we were unable to recover it.
00:29:21.858 [2024-07-24 23:17:39.615229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.858 [2024-07-24 23:17:39.615301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.858 [2024-07-24 23:17:39.615317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.858 [2024-07-24 23:17:39.615324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.858 [2024-07-24 23:17:39.615330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.859 [2024-07-24 23:17:39.615345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.859 qpair failed and we were unable to recover it.
00:29:21.859 [2024-07-24 23:17:39.625133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.859 [2024-07-24 23:17:39.625217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.859 [2024-07-24 23:17:39.625233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.859 [2024-07-24 23:17:39.625240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.859 [2024-07-24 23:17:39.625246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.859 [2024-07-24 23:17:39.625260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.859 qpair failed and we were unable to recover it.
00:29:21.859 [2024-07-24 23:17:39.635275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.859 [2024-07-24 23:17:39.635335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.859 [2024-07-24 23:17:39.635351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.859 [2024-07-24 23:17:39.635358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.859 [2024-07-24 23:17:39.635364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:21.859 [2024-07-24 23:17:39.635378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.859 qpair failed and we were unable to recover it.
00:29:22.121 [2024-07-24 23:17:39.645367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.121 [2024-07-24 23:17:39.645452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.121 [2024-07-24 23:17:39.645468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.121 [2024-07-24 23:17:39.645475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.121 [2024-07-24 23:17:39.645481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.121 [2024-07-24 23:17:39.645496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.121 qpair failed and we were unable to recover it.
00:29:22.121 [2024-07-24 23:17:39.655331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.121 [2024-07-24 23:17:39.655398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.121 [2024-07-24 23:17:39.655415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.121 [2024-07-24 23:17:39.655422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.121 [2024-07-24 23:17:39.655428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.121 [2024-07-24 23:17:39.655442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.121 qpair failed and we were unable to recover it.
00:29:22.121 [2024-07-24 23:17:39.665362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.121 [2024-07-24 23:17:39.665429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.121 [2024-07-24 23:17:39.665449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.121 [2024-07-24 23:17:39.665456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.121 [2024-07-24 23:17:39.665461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.121 [2024-07-24 23:17:39.665476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.121 qpair failed and we were unable to recover it.
00:29:22.121 [2024-07-24 23:17:39.675450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.122 [2024-07-24 23:17:39.675518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.122 [2024-07-24 23:17:39.675534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.122 [2024-07-24 23:17:39.675541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.122 [2024-07-24 23:17:39.675547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.122 [2024-07-24 23:17:39.675562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.122 qpair failed and we were unable to recover it.
00:29:22.122 [2024-07-24 23:17:39.685309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.122 [2024-07-24 23:17:39.685377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.122 [2024-07-24 23:17:39.685393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.122 [2024-07-24 23:17:39.685400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.122 [2024-07-24 23:17:39.685406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.122 [2024-07-24 23:17:39.685420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.122 qpair failed and we were unable to recover it.
00:29:22.122 [2024-07-24 23:17:39.695449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.122 [2024-07-24 23:17:39.695517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.122 [2024-07-24 23:17:39.695534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.122 [2024-07-24 23:17:39.695541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.122 [2024-07-24 23:17:39.695546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.122 [2024-07-24 23:17:39.695561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.122 qpair failed and we were unable to recover it.
00:29:22.122 [2024-07-24 23:17:39.705481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.122 [2024-07-24 23:17:39.705553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.122 [2024-07-24 23:17:39.705579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.122 [2024-07-24 23:17:39.705588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.122 [2024-07-24 23:17:39.705594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.122 [2024-07-24 23:17:39.705618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.122 qpair failed and we were unable to recover it.
00:29:22.122 [2024-07-24 23:17:39.715542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.122 [2024-07-24 23:17:39.715619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.122 [2024-07-24 23:17:39.715637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.122 [2024-07-24 23:17:39.715644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.122 [2024-07-24 23:17:39.715650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.122 [2024-07-24 23:17:39.715666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.122 qpair failed and we were unable to recover it.
00:29:22.122 [2024-07-24 23:17:39.725433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.122 [2024-07-24 23:17:39.725503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.122 [2024-07-24 23:17:39.725520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.122 [2024-07-24 23:17:39.725527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.122 [2024-07-24 23:17:39.725533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.122 [2024-07-24 23:17:39.725548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.122 qpair failed and we were unable to recover it.
00:29:22.122 [2024-07-24 23:17:39.735443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.122 [2024-07-24 23:17:39.735512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.122 [2024-07-24 23:17:39.735528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.122 [2024-07-24 23:17:39.735535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.122 [2024-07-24 23:17:39.735541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.122 [2024-07-24 23:17:39.735555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.122 qpair failed and we were unable to recover it.
00:29:22.122 [2024-07-24 23:17:39.745473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.122 [2024-07-24 23:17:39.745537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.122 [2024-07-24 23:17:39.745555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.122 [2024-07-24 23:17:39.745562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.122 [2024-07-24 23:17:39.745568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.122 [2024-07-24 23:17:39.745582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.122 qpair failed and we were unable to recover it.
00:29:22.122 [2024-07-24 23:17:39.755626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.122 [2024-07-24 23:17:39.755692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.122 [2024-07-24 23:17:39.755713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.122 [2024-07-24 23:17:39.755720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.122 [2024-07-24 23:17:39.755726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.122 [2024-07-24 23:17:39.755740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.122 qpair failed and we were unable to recover it.
00:29:22.122 [2024-07-24 23:17:39.765625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.122 [2024-07-24 23:17:39.765697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.122 [2024-07-24 23:17:39.765714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.122 [2024-07-24 23:17:39.765721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.122 [2024-07-24 23:17:39.765727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.122 [2024-07-24 23:17:39.765741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.122 qpair failed and we were unable to recover it.
00:29:22.122 [2024-07-24 23:17:39.775666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.122 [2024-07-24 23:17:39.775735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.122 [2024-07-24 23:17:39.775755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.122 [2024-07-24 23:17:39.775762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.122 [2024-07-24 23:17:39.775768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.122 [2024-07-24 23:17:39.775783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.122 qpair failed and we were unable to recover it.
00:29:22.122 [2024-07-24 23:17:39.785683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.122 [2024-07-24 23:17:39.785757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.122 [2024-07-24 23:17:39.785773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.122 [2024-07-24 23:17:39.785780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.122 [2024-07-24 23:17:39.785786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.122 [2024-07-24 23:17:39.785801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.122 qpair failed and we were unable to recover it.
00:29:22.122 [2024-07-24 23:17:39.795635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.123 [2024-07-24 23:17:39.795733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.123 [2024-07-24 23:17:39.795755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.123 [2024-07-24 23:17:39.795762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.123 [2024-07-24 23:17:39.795768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.123 [2024-07-24 23:17:39.795787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.123 qpair failed and we were unable to recover it.
00:29:22.123 [2024-07-24 23:17:39.805737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.123 [2024-07-24 23:17:39.805806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.123 [2024-07-24 23:17:39.805823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.123 [2024-07-24 23:17:39.805830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.123 [2024-07-24 23:17:39.805836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.123 [2024-07-24 23:17:39.805850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.123 qpair failed and we were unable to recover it.
00:29:22.123 [2024-07-24 23:17:39.815777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.123 [2024-07-24 23:17:39.815851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.123 [2024-07-24 23:17:39.815867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.123 [2024-07-24 23:17:39.815874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.123 [2024-07-24 23:17:39.815880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.123 [2024-07-24 23:17:39.815895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.123 qpair failed and we were unable to recover it.
00:29:22.123 [2024-07-24 23:17:39.825785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.123 [2024-07-24 23:17:39.825863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.123 [2024-07-24 23:17:39.825880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.123 [2024-07-24 23:17:39.825886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.123 [2024-07-24 23:17:39.825892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.123 [2024-07-24 23:17:39.825906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.123 qpair failed and we were unable to recover it.
00:29:22.123 [2024-07-24 23:17:39.835815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.123 [2024-07-24 23:17:39.835881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.123 [2024-07-24 23:17:39.835897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.123 [2024-07-24 23:17:39.835904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.123 [2024-07-24 23:17:39.835910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.123 [2024-07-24 23:17:39.835925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.123 qpair failed and we were unable to recover it.
00:29:22.123 [2024-07-24 23:17:39.845919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.123 [2024-07-24 23:17:39.845987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.123 [2024-07-24 23:17:39.846007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.123 [2024-07-24 23:17:39.846013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.123 [2024-07-24 23:17:39.846019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.123 [2024-07-24 23:17:39.846034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.123 qpair failed and we were unable to recover it.
00:29:22.123 [2024-07-24 23:17:39.855883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.123 [2024-07-24 23:17:39.855950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.123 [2024-07-24 23:17:39.855966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.123 [2024-07-24 23:17:39.855973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.123 [2024-07-24 23:17:39.855979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.123 [2024-07-24 23:17:39.855994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.123 qpair failed and we were unable to recover it.
00:29:22.123 [2024-07-24 23:17:39.865787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.123 [2024-07-24 23:17:39.865862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.123 [2024-07-24 23:17:39.865879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.123 [2024-07-24 23:17:39.865886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.123 [2024-07-24 23:17:39.865892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.123 [2024-07-24 23:17:39.865906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.123 qpair failed and we were unable to recover it.
00:29:22.123 [2024-07-24 23:17:39.875881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.123 [2024-07-24 23:17:39.875947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.123 [2024-07-24 23:17:39.875964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.123 [2024-07-24 23:17:39.875971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.123 [2024-07-24 23:17:39.875977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.123 [2024-07-24 23:17:39.875992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.123 qpair failed and we were unable to recover it.
00:29:22.123 [2024-07-24 23:17:39.885998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.123 [2024-07-24 23:17:39.886066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.123 [2024-07-24 23:17:39.886082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.123 [2024-07-24 23:17:39.886089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.123 [2024-07-24 23:17:39.886095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.123 [2024-07-24 23:17:39.886116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.123 qpair failed and we were unable to recover it.
00:29:22.123 [2024-07-24 23:17:39.895985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.123 [2024-07-24 23:17:39.896045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.123 [2024-07-24 23:17:39.896062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.123 [2024-07-24 23:17:39.896069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.123 [2024-07-24 23:17:39.896075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.123 [2024-07-24 23:17:39.896089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.123 qpair failed and we were unable to recover it.
00:29:22.123 [2024-07-24 23:17:39.906028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.123 [2024-07-24 23:17:39.906096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.123 [2024-07-24 23:17:39.906113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.123 [2024-07-24 23:17:39.906120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.123 [2024-07-24 23:17:39.906126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.123 [2024-07-24 23:17:39.906140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.123 qpair failed and we were unable to recover it.
00:29:22.385 [2024-07-24 23:17:39.915917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.385 [2024-07-24 23:17:39.915994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.385 [2024-07-24 23:17:39.916011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.385 [2024-07-24 23:17:39.916018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.385 [2024-07-24 23:17:39.916024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.385 [2024-07-24 23:17:39.916038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.385 qpair failed and we were unable to recover it. 
00:29:22.385 [2024-07-24 23:17:39.926075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.385 [2024-07-24 23:17:39.926144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.385 [2024-07-24 23:17:39.926160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.385 [2024-07-24 23:17:39.926167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.385 [2024-07-24 23:17:39.926173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.386 [2024-07-24 23:17:39.926187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.386 qpair failed and we were unable to recover it. 
00:29:22.386 [2024-07-24 23:17:39.936033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.386 [2024-07-24 23:17:39.936102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.386 [2024-07-24 23:17:39.936122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.386 [2024-07-24 23:17:39.936129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.386 [2024-07-24 23:17:39.936135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.386 [2024-07-24 23:17:39.936149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.386 qpair failed and we were unable to recover it. 
00:29:22.386 [2024-07-24 23:17:39.945992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.386 [2024-07-24 23:17:39.946073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.386 [2024-07-24 23:17:39.946089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.386 [2024-07-24 23:17:39.946096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.386 [2024-07-24 23:17:39.946102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.386 [2024-07-24 23:17:39.946117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.386 qpair failed and we were unable to recover it. 
00:29:22.386 [2024-07-24 23:17:39.956158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.386 [2024-07-24 23:17:39.956223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.386 [2024-07-24 23:17:39.956239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.386 [2024-07-24 23:17:39.956246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.386 [2024-07-24 23:17:39.956252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.386 [2024-07-24 23:17:39.956267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.386 qpair failed and we were unable to recover it. 
00:29:22.386 [2024-07-24 23:17:39.966162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.386 [2024-07-24 23:17:39.966230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.386 [2024-07-24 23:17:39.966247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.386 [2024-07-24 23:17:39.966253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.386 [2024-07-24 23:17:39.966259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.386 [2024-07-24 23:17:39.966273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.386 qpair failed and we were unable to recover it. 
00:29:22.386 [2024-07-24 23:17:39.976198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.386 [2024-07-24 23:17:39.976267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.386 [2024-07-24 23:17:39.976284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.386 [2024-07-24 23:17:39.976291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.386 [2024-07-24 23:17:39.976300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.386 [2024-07-24 23:17:39.976314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.386 qpair failed and we were unable to recover it. 
00:29:22.386 [2024-07-24 23:17:39.986247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.386 [2024-07-24 23:17:39.986314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.386 [2024-07-24 23:17:39.986331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.386 [2024-07-24 23:17:39.986338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.386 [2024-07-24 23:17:39.986343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.386 [2024-07-24 23:17:39.986357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.386 qpair failed and we were unable to recover it. 
00:29:22.386 [2024-07-24 23:17:39.996227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.386 [2024-07-24 23:17:39.996299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.386 [2024-07-24 23:17:39.996315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.386 [2024-07-24 23:17:39.996322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.386 [2024-07-24 23:17:39.996327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.386 [2024-07-24 23:17:39.996341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.386 qpair failed and we were unable to recover it. 
00:29:22.386 [2024-07-24 23:17:40.006266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.386 [2024-07-24 23:17:40.006334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.386 [2024-07-24 23:17:40.006352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.386 [2024-07-24 23:17:40.006359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.386 [2024-07-24 23:17:40.006365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.386 [2024-07-24 23:17:40.006379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.386 qpair failed and we were unable to recover it. 
00:29:22.386 [2024-07-24 23:17:40.016296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.386 [2024-07-24 23:17:40.016367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.386 [2024-07-24 23:17:40.016384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.386 [2024-07-24 23:17:40.016391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.386 [2024-07-24 23:17:40.016396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.386 [2024-07-24 23:17:40.016411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.386 qpair failed and we were unable to recover it. 
00:29:22.386 [2024-07-24 23:17:40.026324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.386 [2024-07-24 23:17:40.026396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.386 [2024-07-24 23:17:40.026413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.386 [2024-07-24 23:17:40.026420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.386 [2024-07-24 23:17:40.026426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.386 [2024-07-24 23:17:40.026441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.386 qpair failed and we were unable to recover it. 
00:29:22.386 [2024-07-24 23:17:40.036293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.386 [2024-07-24 23:17:40.036467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.386 [2024-07-24 23:17:40.036501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.386 [2024-07-24 23:17:40.036515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.386 [2024-07-24 23:17:40.036534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.386 [2024-07-24 23:17:40.036566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.386 qpair failed and we were unable to recover it. 
00:29:22.386 [2024-07-24 23:17:40.046377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.386 [2024-07-24 23:17:40.046443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.386 [2024-07-24 23:17:40.046462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.386 [2024-07-24 23:17:40.046469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.386 [2024-07-24 23:17:40.046476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.386 [2024-07-24 23:17:40.046492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.386 qpair failed and we were unable to recover it. 
00:29:22.386 [2024-07-24 23:17:40.056425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.386 [2024-07-24 23:17:40.056493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.386 [2024-07-24 23:17:40.056510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.386 [2024-07-24 23:17:40.056517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.386 [2024-07-24 23:17:40.056524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.386 [2024-07-24 23:17:40.056538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.386 qpair failed and we were unable to recover it. 
00:29:22.387 [2024-07-24 23:17:40.066434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.387 [2024-07-24 23:17:40.066497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.387 [2024-07-24 23:17:40.066514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.387 [2024-07-24 23:17:40.066521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.387 [2024-07-24 23:17:40.066532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.387 [2024-07-24 23:17:40.066547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.387 qpair failed and we were unable to recover it. 
00:29:22.387 [2024-07-24 23:17:40.076464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.387 [2024-07-24 23:17:40.076530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.387 [2024-07-24 23:17:40.076547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.387 [2024-07-24 23:17:40.076554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.387 [2024-07-24 23:17:40.076560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.387 [2024-07-24 23:17:40.076575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.387 qpair failed and we were unable to recover it. 
00:29:22.387 [2024-07-24 23:17:40.086493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.387 [2024-07-24 23:17:40.086565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.387 [2024-07-24 23:17:40.086582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.387 [2024-07-24 23:17:40.086589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.387 [2024-07-24 23:17:40.086595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.387 [2024-07-24 23:17:40.086610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.387 qpair failed and we were unable to recover it. 
00:29:22.387 [2024-07-24 23:17:40.096521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.387 [2024-07-24 23:17:40.096580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.387 [2024-07-24 23:17:40.096597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.387 [2024-07-24 23:17:40.096604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.387 [2024-07-24 23:17:40.096610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.387 [2024-07-24 23:17:40.096625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.387 qpair failed and we were unable to recover it. 
00:29:22.387 [2024-07-24 23:17:40.106564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.387 [2024-07-24 23:17:40.106678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.387 [2024-07-24 23:17:40.106697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.387 [2024-07-24 23:17:40.106704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.387 [2024-07-24 23:17:40.106710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.387 [2024-07-24 23:17:40.106726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.387 qpair failed and we were unable to recover it. 
00:29:22.387 [2024-07-24 23:17:40.116466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.387 [2024-07-24 23:17:40.116541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.387 [2024-07-24 23:17:40.116557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.387 [2024-07-24 23:17:40.116564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.387 [2024-07-24 23:17:40.116570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.387 [2024-07-24 23:17:40.116585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.387 qpair failed and we were unable to recover it. 
00:29:22.387 [2024-07-24 23:17:40.126498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.387 [2024-07-24 23:17:40.126568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.387 [2024-07-24 23:17:40.126585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.387 [2024-07-24 23:17:40.126592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.387 [2024-07-24 23:17:40.126598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.387 [2024-07-24 23:17:40.126613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.387 qpair failed and we were unable to recover it. 
00:29:22.387 [2024-07-24 23:17:40.136630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.387 [2024-07-24 23:17:40.136698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.387 [2024-07-24 23:17:40.136715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.387 [2024-07-24 23:17:40.136722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.387 [2024-07-24 23:17:40.136728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.387 [2024-07-24 23:17:40.136742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.387 qpair failed and we were unable to recover it. 
00:29:22.387 [2024-07-24 23:17:40.146638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.387 [2024-07-24 23:17:40.146704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.387 [2024-07-24 23:17:40.146721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.387 [2024-07-24 23:17:40.146728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.387 [2024-07-24 23:17:40.146734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.387 [2024-07-24 23:17:40.146749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.387 qpair failed and we were unable to recover it. 
00:29:22.387 [2024-07-24 23:17:40.156681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.387 [2024-07-24 23:17:40.156746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.387 [2024-07-24 23:17:40.156766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.387 [2024-07-24 23:17:40.156776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.387 [2024-07-24 23:17:40.156782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.387 [2024-07-24 23:17:40.156798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.387 qpair failed and we were unable to recover it. 
00:29:22.387 [2024-07-24 23:17:40.166719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.387 [2024-07-24 23:17:40.166789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.387 [2024-07-24 23:17:40.166807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.387 [2024-07-24 23:17:40.166813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.387 [2024-07-24 23:17:40.166819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.387 [2024-07-24 23:17:40.166834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.387 qpair failed and we were unable to recover it. 
00:29:22.649 [2024-07-24 23:17:40.176749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.649 [2024-07-24 23:17:40.176820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.649 [2024-07-24 23:17:40.176837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.649 [2024-07-24 23:17:40.176844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.649 [2024-07-24 23:17:40.176850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.649 [2024-07-24 23:17:40.176865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.649 qpair failed and we were unable to recover it. 
00:29:22.649 [2024-07-24 23:17:40.186762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.649 [2024-07-24 23:17:40.186829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.649 [2024-07-24 23:17:40.186846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.649 [2024-07-24 23:17:40.186853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.649 [2024-07-24 23:17:40.186859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.649 [2024-07-24 23:17:40.186874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.649 qpair failed and we were unable to recover it. 
00:29:22.649 [2024-07-24 23:17:40.196819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.649 [2024-07-24 23:17:40.196884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.649 [2024-07-24 23:17:40.196901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.649 [2024-07-24 23:17:40.196908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.649 [2024-07-24 23:17:40.196914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.649 [2024-07-24 23:17:40.196929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.649 qpair failed and we were unable to recover it. 
00:29:22.649 [2024-07-24 23:17:40.206775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.649 [2024-07-24 23:17:40.206839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.649 [2024-07-24 23:17:40.206856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.649 [2024-07-24 23:17:40.206863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.649 [2024-07-24 23:17:40.206870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.649 [2024-07-24 23:17:40.206884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.650 qpair failed and we were unable to recover it. 
00:29:22.650 [2024-07-24 23:17:40.216848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.650 [2024-07-24 23:17:40.216922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.650 [2024-07-24 23:17:40.216939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.650 [2024-07-24 23:17:40.216946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.650 [2024-07-24 23:17:40.216952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.650 [2024-07-24 23:17:40.216966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.650 qpair failed and we were unable to recover it. 
00:29:22.650 [2024-07-24 23:17:40.226881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.650 [2024-07-24 23:17:40.226948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.650 [2024-07-24 23:17:40.226965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.650 [2024-07-24 23:17:40.226973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.650 [2024-07-24 23:17:40.226979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.650 [2024-07-24 23:17:40.226994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.650 qpair failed and we were unable to recover it. 
00:29:22.650 [2024-07-24 23:17:40.236783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.650 [2024-07-24 23:17:40.236851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.650 [2024-07-24 23:17:40.236867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.650 [2024-07-24 23:17:40.236874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.650 [2024-07-24 23:17:40.236880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.650 [2024-07-24 23:17:40.236896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.650 qpair failed and we were unable to recover it. 
00:29:22.650 [2024-07-24 23:17:40.246931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.650 [2024-07-24 23:17:40.247017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.650 [2024-07-24 23:17:40.247034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.650 [2024-07-24 23:17:40.247045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.650 [2024-07-24 23:17:40.247051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.650 [2024-07-24 23:17:40.247066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.650 qpair failed and we were unable to recover it. 
00:29:22.650 [2024-07-24 23:17:40.256941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.650 [2024-07-24 23:17:40.257010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.650 [2024-07-24 23:17:40.257027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.650 [2024-07-24 23:17:40.257034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.650 [2024-07-24 23:17:40.257040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.650 [2024-07-24 23:17:40.257055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.650 qpair failed and we were unable to recover it. 
00:29:22.650 [2024-07-24 23:17:40.266999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.650 [2024-07-24 23:17:40.267065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.650 [2024-07-24 23:17:40.267081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.650 [2024-07-24 23:17:40.267089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.650 [2024-07-24 23:17:40.267095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.650 [2024-07-24 23:17:40.267109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.650 qpair failed and we were unable to recover it. 
00:29:22.650 [2024-07-24 23:17:40.277075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.650 [2024-07-24 23:17:40.277151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.650 [2024-07-24 23:17:40.277168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.650 [2024-07-24 23:17:40.277175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.650 [2024-07-24 23:17:40.277181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.650 [2024-07-24 23:17:40.277196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.650 qpair failed and we were unable to recover it. 
00:29:22.650 [2024-07-24 23:17:40.287037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.650 [2024-07-24 23:17:40.287101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.650 [2024-07-24 23:17:40.287117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.650 [2024-07-24 23:17:40.287124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.650 [2024-07-24 23:17:40.287130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.650 [2024-07-24 23:17:40.287145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.650 qpair failed and we were unable to recover it. 
00:29:22.650 [2024-07-24 23:17:40.296947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.650 [2024-07-24 23:17:40.297020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.650 [2024-07-24 23:17:40.297037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.650 [2024-07-24 23:17:40.297043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.650 [2024-07-24 23:17:40.297049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.650 [2024-07-24 23:17:40.297064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.650 qpair failed and we were unable to recover it. 
00:29:22.650 [2024-07-24 23:17:40.307130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.650 [2024-07-24 23:17:40.307199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.650 [2024-07-24 23:17:40.307215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.650 [2024-07-24 23:17:40.307222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.650 [2024-07-24 23:17:40.307228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.650 [2024-07-24 23:17:40.307243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.650 qpair failed and we were unable to recover it. 
00:29:22.650 [2024-07-24 23:17:40.317103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.650 [2024-07-24 23:17:40.317242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.650 [2024-07-24 23:17:40.317261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.650 [2024-07-24 23:17:40.317268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.650 [2024-07-24 23:17:40.317274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.650 [2024-07-24 23:17:40.317338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.650 qpair failed and we were unable to recover it. 
00:29:22.650 [2024-07-24 23:17:40.327139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.650 [2024-07-24 23:17:40.327207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.650 [2024-07-24 23:17:40.327224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.650 [2024-07-24 23:17:40.327231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.650 [2024-07-24 23:17:40.327237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.650 [2024-07-24 23:17:40.327252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.650 qpair failed and we were unable to recover it. 
00:29:22.650 [2024-07-24 23:17:40.337159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.650 [2024-07-24 23:17:40.337237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.650 [2024-07-24 23:17:40.337253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.650 [2024-07-24 23:17:40.337264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.650 [2024-07-24 23:17:40.337270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.650 [2024-07-24 23:17:40.337285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.650 qpair failed and we were unable to recover it. 
00:29:22.650 [2024-07-24 23:17:40.347078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.651 [2024-07-24 23:17:40.347147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.651 [2024-07-24 23:17:40.347163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.651 [2024-07-24 23:17:40.347170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.651 [2024-07-24 23:17:40.347176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.651 [2024-07-24 23:17:40.347190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.651 qpair failed and we were unable to recover it. 
00:29:22.651 [2024-07-24 23:17:40.357217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.651 [2024-07-24 23:17:40.357280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.651 [2024-07-24 23:17:40.357297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.651 [2024-07-24 23:17:40.357304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.651 [2024-07-24 23:17:40.357310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.651 [2024-07-24 23:17:40.357324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.651 qpair failed and we were unable to recover it. 
00:29:22.651 [2024-07-24 23:17:40.367237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.651 [2024-07-24 23:17:40.367303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.651 [2024-07-24 23:17:40.367320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.651 [2024-07-24 23:17:40.367327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.651 [2024-07-24 23:17:40.367333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.651 [2024-07-24 23:17:40.367347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.651 qpair failed and we were unable to recover it. 
00:29:22.651 [2024-07-24 23:17:40.377164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.651 [2024-07-24 23:17:40.377228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.651 [2024-07-24 23:17:40.377245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.651 [2024-07-24 23:17:40.377252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.651 [2024-07-24 23:17:40.377258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.651 [2024-07-24 23:17:40.377273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.651 qpair failed and we were unable to recover it. 
00:29:22.651 [2024-07-24 23:17:40.387351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.651 [2024-07-24 23:17:40.387416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.651 [2024-07-24 23:17:40.387433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.651 [2024-07-24 23:17:40.387440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.651 [2024-07-24 23:17:40.387446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.651 [2024-07-24 23:17:40.387460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.651 qpair failed and we were unable to recover it. 
00:29:22.651 [2024-07-24 23:17:40.397219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.651 [2024-07-24 23:17:40.397287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.651 [2024-07-24 23:17:40.397304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.651 [2024-07-24 23:17:40.397311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.651 [2024-07-24 23:17:40.397316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.651 [2024-07-24 23:17:40.397331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.651 qpair failed and we were unable to recover it. 
00:29:22.651 [2024-07-24 23:17:40.407405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.651 [2024-07-24 23:17:40.407479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.651 [2024-07-24 23:17:40.407496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.651 [2024-07-24 23:17:40.407504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.651 [2024-07-24 23:17:40.407509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.651 [2024-07-24 23:17:40.407524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.651 qpair failed and we were unable to recover it. 
00:29:22.651 [2024-07-24 23:17:40.417407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.651 [2024-07-24 23:17:40.417482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.651 [2024-07-24 23:17:40.417507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.651 [2024-07-24 23:17:40.417516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.651 [2024-07-24 23:17:40.417523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.651 [2024-07-24 23:17:40.417542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.651 qpair failed and we were unable to recover it. 
00:29:22.651 [2024-07-24 23:17:40.427469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.651 [2024-07-24 23:17:40.427544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.651 [2024-07-24 23:17:40.427574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.651 [2024-07-24 23:17:40.427583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.651 [2024-07-24 23:17:40.427589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.651 [2024-07-24 23:17:40.427609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.651 qpair failed and we were unable to recover it. 
00:29:22.914 [2024-07-24 23:17:40.437439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.914 [2024-07-24 23:17:40.437579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.914 [2024-07-24 23:17:40.437605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.914 [2024-07-24 23:17:40.437613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.914 [2024-07-24 23:17:40.437620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.914 [2024-07-24 23:17:40.437639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.914 qpair failed and we were unable to recover it. 
00:29:22.914 [2024-07-24 23:17:40.447360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.914 [2024-07-24 23:17:40.447429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.914 [2024-07-24 23:17:40.447447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.914 [2024-07-24 23:17:40.447454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.914 [2024-07-24 23:17:40.447461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.914 [2024-07-24 23:17:40.447477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.914 qpair failed and we were unable to recover it. 
00:29:22.914 [2024-07-24 23:17:40.457493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.914 [2024-07-24 23:17:40.457564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.914 [2024-07-24 23:17:40.457581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.914 [2024-07-24 23:17:40.457590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.914 [2024-07-24 23:17:40.457596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.914 [2024-07-24 23:17:40.457610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.914 qpair failed and we were unable to recover it. 
00:29:22.914 [2024-07-24 23:17:40.467521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.914 [2024-07-24 23:17:40.467587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.914 [2024-07-24 23:17:40.467604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.914 [2024-07-24 23:17:40.467612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.914 [2024-07-24 23:17:40.467618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.914 [2024-07-24 23:17:40.467637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.914 qpair failed and we were unable to recover it. 
00:29:22.914 [2024-07-24 23:17:40.477544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.914 [2024-07-24 23:17:40.477613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.914 [2024-07-24 23:17:40.477630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.914 [2024-07-24 23:17:40.477637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.914 [2024-07-24 23:17:40.477643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.914 [2024-07-24 23:17:40.477658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.914 qpair failed and we were unable to recover it. 
00:29:22.914 [2024-07-24 23:17:40.487579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.914 [2024-07-24 23:17:40.487645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.914 [2024-07-24 23:17:40.487662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.914 [2024-07-24 23:17:40.487669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.914 [2024-07-24 23:17:40.487675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.914 [2024-07-24 23:17:40.487689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.914 qpair failed and we were unable to recover it. 
00:29:22.914 [2024-07-24 23:17:40.497612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.914 [2024-07-24 23:17:40.497686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.914 [2024-07-24 23:17:40.497706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.914 [2024-07-24 23:17:40.497713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.914 [2024-07-24 23:17:40.497719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.914 [2024-07-24 23:17:40.497734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.914 qpair failed and we were unable to recover it. 
00:29:22.914 [2024-07-24 23:17:40.507636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.914 [2024-07-24 23:17:40.507701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.914 [2024-07-24 23:17:40.507718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.914 [2024-07-24 23:17:40.507725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.914 [2024-07-24 23:17:40.507731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0 00:29:22.914 [2024-07-24 23:17:40.507745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.914 qpair failed and we were unable to recover it. 
00:29:22.914 [2024-07-24 23:17:40.517608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.914 [2024-07-24 23:17:40.517673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.914 [2024-07-24 23:17:40.517694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.914 [2024-07-24 23:17:40.517701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.914 [2024-07-24 23:17:40.517708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.914 [2024-07-24 23:17:40.517723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.914 qpair failed and we were unable to recover it.
00:29:22.914 [2024-07-24 23:17:40.527683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.914 [2024-07-24 23:17:40.527754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.914 [2024-07-24 23:17:40.527771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.914 [2024-07-24 23:17:40.527779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.914 [2024-07-24 23:17:40.527785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.914 [2024-07-24 23:17:40.527799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.914 qpair failed and we were unable to recover it.
00:29:22.914 [2024-07-24 23:17:40.537743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.914 [2024-07-24 23:17:40.537892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.914 [2024-07-24 23:17:40.537909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.914 [2024-07-24 23:17:40.537916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.914 [2024-07-24 23:17:40.537922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.914 [2024-07-24 23:17:40.537936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.915 qpair failed and we were unable to recover it.
00:29:22.915 [2024-07-24 23:17:40.547708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.915 [2024-07-24 23:17:40.547774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.915 [2024-07-24 23:17:40.547791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.915 [2024-07-24 23:17:40.547798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.915 [2024-07-24 23:17:40.547804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.915 [2024-07-24 23:17:40.547818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.915 qpair failed and we were unable to recover it.
00:29:22.915 [2024-07-24 23:17:40.557777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.915 [2024-07-24 23:17:40.557842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.915 [2024-07-24 23:17:40.557859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.915 [2024-07-24 23:17:40.557866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.915 [2024-07-24 23:17:40.557872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.915 [2024-07-24 23:17:40.557891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.915 qpair failed and we were unable to recover it.
00:29:22.915 [2024-07-24 23:17:40.567832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.915 [2024-07-24 23:17:40.567898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.915 [2024-07-24 23:17:40.567915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.915 [2024-07-24 23:17:40.567922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.915 [2024-07-24 23:17:40.567928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.915 [2024-07-24 23:17:40.567943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.915 qpair failed and we were unable to recover it.
00:29:22.915 [2024-07-24 23:17:40.577825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.915 [2024-07-24 23:17:40.577898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.915 [2024-07-24 23:17:40.577914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.915 [2024-07-24 23:17:40.577922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.915 [2024-07-24 23:17:40.577928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.915 [2024-07-24 23:17:40.577943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.915 qpair failed and we were unable to recover it.
00:29:22.915 [2024-07-24 23:17:40.587741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.915 [2024-07-24 23:17:40.587812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.915 [2024-07-24 23:17:40.587828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.915 [2024-07-24 23:17:40.587835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.915 [2024-07-24 23:17:40.587841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.915 [2024-07-24 23:17:40.587857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.915 qpair failed and we were unable to recover it.
00:29:22.915 [2024-07-24 23:17:40.597864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.915 [2024-07-24 23:17:40.597932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.915 [2024-07-24 23:17:40.597949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.915 [2024-07-24 23:17:40.597956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.915 [2024-07-24 23:17:40.597962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.915 [2024-07-24 23:17:40.597976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.915 qpair failed and we were unable to recover it.
00:29:22.915 [2024-07-24 23:17:40.607960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.915 [2024-07-24 23:17:40.608027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.915 [2024-07-24 23:17:40.608046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.915 [2024-07-24 23:17:40.608053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.915 [2024-07-24 23:17:40.608059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.915 [2024-07-24 23:17:40.608074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.915 qpair failed and we were unable to recover it.
00:29:22.915 [2024-07-24 23:17:40.617917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.915 [2024-07-24 23:17:40.617987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.915 [2024-07-24 23:17:40.618003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.915 [2024-07-24 23:17:40.618010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.915 [2024-07-24 23:17:40.618016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.915 [2024-07-24 23:17:40.618031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.915 qpair failed and we were unable to recover it.
00:29:22.915 [2024-07-24 23:17:40.627975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.915 [2024-07-24 23:17:40.628046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.915 [2024-07-24 23:17:40.628063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.915 [2024-07-24 23:17:40.628070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.915 [2024-07-24 23:17:40.628076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.915 [2024-07-24 23:17:40.628090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.915 qpair failed and we were unable to recover it.
00:29:22.915 [2024-07-24 23:17:40.637979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.915 [2024-07-24 23:17:40.638045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.915 [2024-07-24 23:17:40.638062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.915 [2024-07-24 23:17:40.638068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.915 [2024-07-24 23:17:40.638074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.915 [2024-07-24 23:17:40.638089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.915 qpair failed and we were unable to recover it.
00:29:22.915 [2024-07-24 23:17:40.647994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.915 [2024-07-24 23:17:40.648060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.915 [2024-07-24 23:17:40.648076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.915 [2024-07-24 23:17:40.648083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.915 [2024-07-24 23:17:40.648089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.915 [2024-07-24 23:17:40.648110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.915 qpair failed and we were unable to recover it.
00:29:22.915 [2024-07-24 23:17:40.657921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.915 [2024-07-24 23:17:40.657994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.915 [2024-07-24 23:17:40.658011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.915 [2024-07-24 23:17:40.658017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.915 [2024-07-24 23:17:40.658023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.915 [2024-07-24 23:17:40.658037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.915 qpair failed and we were unable to recover it.
00:29:22.915 [2024-07-24 23:17:40.668049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.915 [2024-07-24 23:17:40.668113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.915 [2024-07-24 23:17:40.668130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.915 [2024-07-24 23:17:40.668136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.915 [2024-07-24 23:17:40.668142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.915 [2024-07-24 23:17:40.668156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.915 qpair failed and we were unable to recover it.
00:29:22.915 [2024-07-24 23:17:40.678075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.915 [2024-07-24 23:17:40.678142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.916 [2024-07-24 23:17:40.678158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.916 [2024-07-24 23:17:40.678165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.916 [2024-07-24 23:17:40.678171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.916 [2024-07-24 23:17:40.678186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.916 qpair failed and we were unable to recover it.
00:29:22.916 [2024-07-24 23:17:40.688062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.916 [2024-07-24 23:17:40.688129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.916 [2024-07-24 23:17:40.688146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.916 [2024-07-24 23:17:40.688153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.916 [2024-07-24 23:17:40.688159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.916 [2024-07-24 23:17:40.688173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.916 qpair failed and we were unable to recover it.
00:29:22.916 [2024-07-24 23:17:40.698158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:22.916 [2024-07-24 23:17:40.698232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:22.916 [2024-07-24 23:17:40.698252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:22.916 [2024-07-24 23:17:40.698259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:22.916 [2024-07-24 23:17:40.698265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:22.916 [2024-07-24 23:17:40.698280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:22.916 qpair failed and we were unable to recover it.
00:29:23.178 [2024-07-24 23:17:40.708170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.178 [2024-07-24 23:17:40.708264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.178 [2024-07-24 23:17:40.708280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.178 [2024-07-24 23:17:40.708288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.178 [2024-07-24 23:17:40.708294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:23.178 [2024-07-24 23:17:40.708308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.178 qpair failed and we were unable to recover it.
00:29:23.178 [2024-07-24 23:17:40.718199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.178 [2024-07-24 23:17:40.718264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.178 [2024-07-24 23:17:40.718281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.178 [2024-07-24 23:17:40.718288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.178 [2024-07-24 23:17:40.718293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:23.178 [2024-07-24 23:17:40.718308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.178 qpair failed and we were unable to recover it.
00:29:23.178 [2024-07-24 23:17:40.728098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.178 [2024-07-24 23:17:40.728167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.178 [2024-07-24 23:17:40.728184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.178 [2024-07-24 23:17:40.728191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.178 [2024-07-24 23:17:40.728197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:23.178 [2024-07-24 23:17:40.728210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.178 qpair failed and we were unable to recover it.
00:29:23.178 [2024-07-24 23:17:40.738204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.178 [2024-07-24 23:17:40.738277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.178 [2024-07-24 23:17:40.738293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.178 [2024-07-24 23:17:40.738300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.178 [2024-07-24 23:17:40.738309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:23.178 [2024-07-24 23:17:40.738324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.178 qpair failed and we were unable to recover it.
00:29:23.178 [2024-07-24 23:17:40.748265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.178 [2024-07-24 23:17:40.748329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.178 [2024-07-24 23:17:40.748346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.178 [2024-07-24 23:17:40.748353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.178 [2024-07-24 23:17:40.748359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:23.178 [2024-07-24 23:17:40.748373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.178 qpair failed and we were unable to recover it.
00:29:23.178 [2024-07-24 23:17:40.758196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.178 [2024-07-24 23:17:40.758299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.178 [2024-07-24 23:17:40.758316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.178 [2024-07-24 23:17:40.758323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.178 [2024-07-24 23:17:40.758329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:23.178 [2024-07-24 23:17:40.758344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.178 qpair failed and we were unable to recover it.
00:29:23.178 [2024-07-24 23:17:40.768328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.178 [2024-07-24 23:17:40.768394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.178 [2024-07-24 23:17:40.768411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.178 [2024-07-24 23:17:40.768417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.178 [2024-07-24 23:17:40.768423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:23.179 [2024-07-24 23:17:40.768437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.179 qpair failed and we were unable to recover it.
00:29:23.179 [2024-07-24 23:17:40.778381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.179 [2024-07-24 23:17:40.778458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.179 [2024-07-24 23:17:40.778484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.179 [2024-07-24 23:17:40.778492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.179 [2024-07-24 23:17:40.778499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:23.179 [2024-07-24 23:17:40.778518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.179 qpair failed and we were unable to recover it.
00:29:23.179 [2024-07-24 23:17:40.788363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.179 [2024-07-24 23:17:40.788461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.179 [2024-07-24 23:17:40.788483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.179 [2024-07-24 23:17:40.788490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.179 [2024-07-24 23:17:40.788497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:23.179 [2024-07-24 23:17:40.788516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.179 qpair failed and we were unable to recover it.
00:29:23.179 [2024-07-24 23:17:40.798450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.179 [2024-07-24 23:17:40.798546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.179 [2024-07-24 23:17:40.798564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.179 [2024-07-24 23:17:40.798571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.179 [2024-07-24 23:17:40.798577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:23.179 [2024-07-24 23:17:40.798592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.179 qpair failed and we were unable to recover it.
00:29:23.179 [2024-07-24 23:17:40.808429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.179 [2024-07-24 23:17:40.808502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.179 [2024-07-24 23:17:40.808528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.179 [2024-07-24 23:17:40.808536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.179 [2024-07-24 23:17:40.808543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:23.179 [2024-07-24 23:17:40.808562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.179 qpair failed and we were unable to recover it.
00:29:23.179 [2024-07-24 23:17:40.818463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.179 [2024-07-24 23:17:40.818539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.179 [2024-07-24 23:17:40.818564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.179 [2024-07-24 23:17:40.818572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.179 [2024-07-24 23:17:40.818579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:23.179 [2024-07-24 23:17:40.818598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.179 qpair failed and we were unable to recover it.
00:29:23.179 [2024-07-24 23:17:40.828435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.179 [2024-07-24 23:17:40.828539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.179 [2024-07-24 23:17:40.828557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.179 [2024-07-24 23:17:40.828565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.179 [2024-07-24 23:17:40.828575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:23.179 [2024-07-24 23:17:40.828591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.179 qpair failed and we were unable to recover it.
00:29:23.179 [2024-07-24 23:17:40.838441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.179 [2024-07-24 23:17:40.838508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.179 [2024-07-24 23:17:40.838526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.179 [2024-07-24 23:17:40.838533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.179 [2024-07-24 23:17:40.838539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:23.179 [2024-07-24 23:17:40.838554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.179 qpair failed and we were unable to recover it.
00:29:23.179 [2024-07-24 23:17:40.848542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.179 [2024-07-24 23:17:40.848610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.179 [2024-07-24 23:17:40.848627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.179 [2024-07-24 23:17:40.848635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.179 [2024-07-24 23:17:40.848641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:23.179 [2024-07-24 23:17:40.848655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.179 qpair failed and we were unable to recover it.
00:29:23.179 [2024-07-24 23:17:40.858550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.179 [2024-07-24 23:17:40.858617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.179 [2024-07-24 23:17:40.858634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.179 [2024-07-24 23:17:40.858641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.179 [2024-07-24 23:17:40.858648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1086aa0
00:29:23.179 [2024-07-24 23:17:40.858662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:23.179 qpair failed and we were unable to recover it.
00:29:23.179 [2024-07-24 23:17:40.859057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1084820 is same with the state(5) to be set
00:29:23.179 [2024-07-24 23:17:40.868649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:23.179 [2024-07-24 23:17:40.868825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:23.179 [2024-07-24 23:17:40.868889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:23.179 [2024-07-24 23:17:40.868914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:23.179 [2024-07-24 23:17:40.868933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6c04000b90
00:29:23.179 [2024-07-24 23:17:40.868988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:23.179 qpair failed and we were unable to recover it.
00:29:23.179 [2024-07-24 23:17:40.878606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.179 [2024-07-24 23:17:40.878684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.179 [2024-07-24 23:17:40.878718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.179 [2024-07-24 23:17:40.878734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.179 [2024-07-24 23:17:40.878747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6c04000b90 00:29:23.179 [2024-07-24 23:17:40.878801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:23.179 qpair failed and we were unable to recover it. 
00:29:23.179 [2024-07-24 23:17:40.888748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.179 [2024-07-24 23:17:40.888923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.180 [2024-07-24 23:17:40.888987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.180 [2024-07-24 23:17:40.889012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.180 [2024-07-24 23:17:40.889033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6bf4000b90 00:29:23.180 [2024-07-24 23:17:40.889087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:23.180 qpair failed and we were unable to recover it. 
00:29:23.180 [2024-07-24 23:17:40.898655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.180 [2024-07-24 23:17:40.898783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.180 [2024-07-24 23:17:40.898818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.180 [2024-07-24 23:17:40.898833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.180 [2024-07-24 23:17:40.898847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6bf4000b90 00:29:23.180 [2024-07-24 23:17:40.898879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:23.180 qpair failed and we were unable to recover it. 
00:29:23.180 Read completed with error (sct=0, sc=8) 00:29:23.180 starting I/O failed 00:29:23.180 Read completed with error (sct=0, sc=8) 00:29:23.180 starting I/O failed 00:29:23.180 Read completed with error (sct=0, sc=8) 00:29:23.180 starting I/O failed 00:29:23.180 Read completed with error (sct=0, sc=8) 00:29:23.180 starting I/O failed 00:29:23.180 Read completed with error (sct=0, sc=8) 00:29:23.180 starting I/O failed 00:29:23.180 Read completed with error (sct=0, sc=8) 00:29:23.180 starting I/O failed 00:29:23.180 Read completed with error (sct=0, sc=8) 00:29:23.180 starting I/O failed 00:29:23.180 Read completed with error (sct=0, sc=8) 00:29:23.180 starting I/O failed 00:29:23.180 Read completed with error (sct=0, sc=8) 00:29:23.180 starting I/O failed 00:29:23.180 Read completed with error (sct=0, sc=8) 00:29:23.180 starting I/O failed 00:29:23.180 Read completed with error (sct=0, sc=8) 00:29:23.180 starting I/O failed 00:29:23.180 Read completed with error (sct=0, sc=8) 00:29:23.180 starting I/O failed 00:29:23.180 Read completed with error (sct=0, sc=8) 00:29:23.180 starting I/O failed 00:29:23.180 Read completed with error (sct=0, sc=8) 00:29:23.180 starting I/O failed 00:29:23.180 Write completed with error (sct=0, sc=8) 00:29:23.180 starting I/O failed 00:29:23.180 Read completed with error (sct=0, sc=8) 00:29:23.180 starting I/O failed 00:29:23.180 Write completed with error (sct=0, sc=8) 00:29:23.180 starting I/O failed 00:29:23.180 Write completed with error (sct=0, sc=8) 00:29:23.180 starting I/O failed 00:29:23.180 Read completed with error (sct=0, sc=8) 00:29:23.180 starting I/O failed 00:29:23.180 Write completed with error (sct=0, sc=8) 00:29:23.180 starting I/O failed 00:29:23.180 Read completed with error (sct=0, sc=8) 00:29:23.180 starting I/O failed 00:29:23.180 Read completed with error (sct=0, sc=8) 00:29:23.180 starting I/O failed 00:29:23.180 Read completed with error (sct=0, sc=8) 00:29:23.180 starting I/O failed 00:29:23.180 
Write completed with error (sct=0, sc=8) 00:29:23.180 starting I/O failed 00:29:23.180 Read completed with error (sct=0, sc=8) 00:29:23.180 starting I/O failed 00:29:23.180 Read completed with error (sct=0, sc=8) 00:29:23.180 starting I/O failed 00:29:23.180 Read completed with error (sct=0, sc=8) 00:29:23.180 starting I/O failed 00:29:23.180 Write completed with error (sct=0, sc=8) 00:29:23.180 starting I/O failed 00:29:23.180 Write completed with error (sct=0, sc=8) 00:29:23.180 starting I/O failed 00:29:23.180 Read completed with error (sct=0, sc=8) 00:29:23.180 starting I/O failed 00:29:23.180 Write completed with error (sct=0, sc=8) 00:29:23.180 starting I/O failed 00:29:23.180 Read completed with error (sct=0, sc=8) 00:29:23.180 starting I/O failed 00:29:23.180 [2024-07-24 23:17:40.899242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.180 [2024-07-24 23:17:40.908640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.180 [2024-07-24 23:17:40.908698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.180 [2024-07-24 23:17:40.908713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.180 [2024-07-24 23:17:40.908719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.180 [2024-07-24 23:17:40.908723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6bfc000b90 00:29:23.180 [2024-07-24 23:17:40.908737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.180 qpair failed and we were unable to recover it. 
00:29:23.180 [2024-07-24 23:17:40.918714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.180 [2024-07-24 23:17:40.918777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.180 [2024-07-24 23:17:40.918797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.180 [2024-07-24 23:17:40.918802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.180 [2024-07-24 23:17:40.918807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6bfc000b90 00:29:23.180 [2024-07-24 23:17:40.918819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.180 qpair failed and we were unable to recover it. 00:29:23.180 [2024-07-24 23:17:40.919370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1084820 (9): Bad file descriptor 00:29:23.180 Initializing NVMe Controllers 00:29:23.180 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:23.180 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:23.180 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:23.180 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:23.180 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:23.180 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:23.180 Initialization complete. Launching workers. 
00:29:23.180 Starting thread on core 1 00:29:23.180 Starting thread on core 2 00:29:23.180 Starting thread on core 3 00:29:23.180 Starting thread on core 0 00:29:23.180 23:17:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:23.180 00:29:23.180 real 0m11.304s 00:29:23.180 user 0m20.997s 00:29:23.180 sys 0m4.198s 00:29:23.180 23:17:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:23.180 23:17:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:23.180 ************************************ 00:29:23.180 END TEST nvmf_target_disconnect_tc2 00:29:23.180 ************************************ 00:29:23.441 23:17:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:23.441 23:17:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:23.441 23:17:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:23.441 23:17:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:23.441 23:17:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:29:23.441 23:17:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:23.441 23:17:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:29:23.441 23:17:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:23.441 23:17:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:23.441 rmmod nvme_tcp 00:29:23.441 rmmod nvme_fabrics 00:29:23.441 rmmod nvme_keyring 00:29:23.441 23:17:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:29:23.442 23:17:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:29:23.442 23:17:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:29:23.442 23:17:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1047798 ']' 00:29:23.442 23:17:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1047798 00:29:23.442 23:17:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1047798 ']' 00:29:23.442 23:17:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 1047798 00:29:23.442 23:17:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:29:23.442 23:17:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:23.442 23:17:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1047798 00:29:23.442 23:17:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:29:23.442 23:17:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:29:23.442 23:17:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1047798' 00:29:23.442 killing process with pid 1047798 00:29:23.442 23:17:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 1047798 00:29:23.442 23:17:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 1047798 00:29:23.703 23:17:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:23.703 23:17:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:23.703 23:17:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:23.703 23:17:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:23.703 23:17:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:23.703 23:17:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.703 23:17:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:23.703 23:17:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.618 23:17:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:25.618 00:29:25.618 real 0m22.106s 00:29:25.618 user 0m48.486s 00:29:25.618 sys 0m10.637s 00:29:25.618 23:17:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:25.618 23:17:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:25.618 ************************************ 00:29:25.618 END TEST nvmf_target_disconnect 00:29:25.618 ************************************ 00:29:25.618 23:17:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:25.618 00:29:25.618 real 6m32.809s 00:29:25.618 user 11m11.515s 00:29:25.618 sys 2m18.177s 00:29:25.618 23:17:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:25.618 23:17:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.618 ************************************ 00:29:25.618 END TEST nvmf_host 00:29:25.618 ************************************ 00:29:25.618 00:29:25.618 real 23m27.962s 00:29:25.618 user 47m32.276s 00:29:25.618 sys 7m39.746s 00:29:25.618 23:17:43 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:25.618 23:17:43 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:29:25.618 ************************************ 00:29:25.618 END TEST nvmf_tcp 00:29:25.618 ************************************ 00:29:25.879 23:17:43 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:29:25.879 23:17:43 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:25.879 23:17:43 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:25.879 23:17:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:25.879 23:17:43 -- common/autotest_common.sh@10 -- # set +x 00:29:25.879 ************************************ 00:29:25.879 START TEST spdkcli_nvmf_tcp 00:29:25.879 ************************************ 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:25.879 * Looking for test storage... 00:29:25.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1049633 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1049633 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 1049633 ']' 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:25.879 23:17:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:25.879 [2024-07-24 23:17:43.658270] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:29:25.879 [2024-07-24 23:17:43.658340] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1049633 ] 00:29:26.140 EAL: No free 2048 kB hugepages reported on node 1 00:29:26.140 [2024-07-24 23:17:43.732275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:26.140 [2024-07-24 23:17:43.806843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:26.140 [2024-07-24 23:17:43.806846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.711 23:17:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:26.711 23:17:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:29:26.711 23:17:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:26.711 23:17:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:26.711 23:17:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:26.711 23:17:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:26.711 23:17:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:26.711 23:17:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:26.711 23:17:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:26.711 23:17:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 
-- # set +x 00:29:26.711 23:17:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:26.711 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:26.711 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:26.711 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:26.711 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:26.711 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:26.711 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:26.711 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:26.711 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:26.711 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:26.711 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:26.711 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:26.711 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:26.711 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:26.711 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:26.711 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:26.711 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:26.711 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:26.711 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:26.711 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:26.711 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:26.711 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:26.711 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:26.711 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:26.711 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:26.711 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:26.711 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:26.711 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:26.711 ' 00:29:29.254 [2024-07-24 23:17:46.804299] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:30.196 [2024-07-24 23:17:47.968253] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:32.738 [2024-07-24 23:17:50.106627] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:34.651 [2024-07-24 23:17:52.060481] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 127.0.0.1 port 4262 *** 00:29:36.036 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:36.036 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:36.036 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:36.036 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:36.036 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:36.036 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:36.036 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:36.036 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:36.037 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:36.037 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:36.037 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:36.037 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:36.037 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:36.037 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:36.037 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:36.037 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 
'Malloc1', True] 00:29:36.037 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:36.037 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:36.037 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:36.037 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:36.037 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:36.037 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:36.037 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:36.037 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:36.037 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:36.037 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:36.037 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:36.037 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:36.037 23:17:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:36.037 23:17:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:36.037 23:17:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:36.037 23:17:53 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:36.037 23:17:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:36.037 23:17:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:36.037 23:17:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:29:36.037 23:17:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:29:36.608 23:17:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:36.608 23:17:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:36.608 23:17:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:36.608 23:17:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:36.608 23:17:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:36.608 23:17:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:36.608 23:17:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:36.608 23:17:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:36.608 23:17:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:36.608 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:36.608 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:36.608 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' 
'\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:36.608 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:36.608 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:36.608 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:36.608 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:36.608 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:36.608 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:36.608 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:36.608 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:36.608 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:36.608 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:36.608 ' 00:29:41.938 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:41.938 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:41.938 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:41.938 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:41.938 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:41.938 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:41.938 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:41.938 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:41.938 Executing command: ['/bdevs/malloc delete Malloc6', 
'Malloc6', False] 00:29:41.938 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:41.938 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:41.938 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:41.938 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:41.938 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:41.938 23:17:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:41.938 23:17:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:41.938 23:17:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:41.938 23:17:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1049633 00:29:41.938 23:17:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1049633 ']' 00:29:41.938 23:17:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1049633 00:29:41.938 23:17:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:29:41.938 23:17:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:41.938 23:17:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1049633 00:29:41.938 23:17:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:41.938 23:17:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:41.938 23:17:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1049633' 00:29:41.938 killing process with pid 1049633 00:29:41.938 23:17:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 1049633 00:29:41.938 23:17:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 1049633 00:29:41.938 23:17:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:41.938 23:17:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:41.938 
23:17:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1049633 ']' 00:29:41.938 23:17:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1049633 00:29:41.938 23:17:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1049633 ']' 00:29:41.938 23:17:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1049633 00:29:41.938 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1049633) - No such process 00:29:41.938 23:17:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 1049633 is not found' 00:29:41.938 Process with pid 1049633 is not found 00:29:41.938 23:17:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:41.938 23:17:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:41.938 23:17:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:41.938 00:29:41.938 real 0m15.828s 00:29:41.938 user 0m32.984s 00:29:41.938 sys 0m0.741s 00:29:41.938 23:17:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:41.938 23:17:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:41.938 ************************************ 00:29:41.938 END TEST spdkcli_nvmf_tcp 00:29:41.938 ************************************ 00:29:41.938 23:17:59 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:41.938 23:17:59 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:41.938 23:17:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:41.938 23:17:59 -- common/autotest_common.sh@10 -- # set +x 00:29:41.938 ************************************ 00:29:41.938 START TEST 
nvmf_identify_passthru 00:29:41.938 ************************************ 00:29:41.938 23:17:59 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:41.938 * Looking for test storage... 00:29:41.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:41.938 23:17:59 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:41.938 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:29:41.938 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:41.938 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:41.938 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:41.938 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:41.938 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:41.938 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:41.938 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:41.938 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:41.938 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:41.938 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:41.938 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:41.938 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:41.938 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:29:41.938 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:41.938 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:41.938 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:41.938 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:41.938 23:17:59 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:41.938 23:17:59 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:41.938 23:17:59 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:41.938 23:17:59 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.938 23:17:59 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.938 23:17:59 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.938 23:17:59 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:41.938 23:17:59 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.938 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:29:41.938 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:41.938 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:41.938 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:41.938 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:41.938 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:41.938 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:41.938 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:41.938 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:41.938 23:17:59 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:41.938 23:17:59 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:41.938 23:17:59 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:41.939 23:17:59 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:41.939 23:17:59 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.939 23:17:59 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.939 23:17:59 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.939 23:17:59 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 
00:29:41.939 23:17:59 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.939 23:17:59 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:41.939 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:41.939 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:41.939 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:41.939 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:41.939 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:41.939 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.939 23:17:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:41.939 23:17:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.939 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:41.939 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:41.939 23:17:59 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:29:41.939 23:17:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@291 -- # 
pci_devs=() 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:50.092 23:18:07 nvmf_identify_passthru -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:50.092 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:50.092 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:50.092 23:18:07 
nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:50.092 Found net devices under 0000:31:00.0: cvl_0_0 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.092 23:18:07 nvmf_identify_passthru -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:50.092 Found net devices under 0000:31:00.1: cvl_0_1 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:50.092 23:18:07 
nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:50.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:50.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.472 ms 00:29:50.092 00:29:50.092 --- 10.0.0.2 ping statistics --- 00:29:50.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.092 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:50.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:50.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:29:50.092 00:29:50.092 --- 10.0.0.1 ping statistics --- 00:29:50.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.092 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:29:50.092 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:50.093 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:29:50.093 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:50.093 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:50.093 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:50.093 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:50.093 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:50.093 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:50.093 23:18:07 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:50.093 23:18:07 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:29:50.093 23:18:07 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:50.093 23:18:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:50.093 23:18:07 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:29:50.093 23:18:07 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:29:50.093 23:18:07 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:29:50.093 23:18:07 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:29:50.093 23:18:07 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:29:50.093 23:18:07 nvmf_identify_passthru -- 
common/autotest_common.sh@1513 -- # bdfs=() 00:29:50.093 23:18:07 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:29:50.093 23:18:07 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:50.093 23:18:07 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:50.093 23:18:07 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:29:50.093 23:18:07 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:29:50.093 23:18:07 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:29:50.093 23:18:07 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:65:00.0 00:29:50.093 23:18:07 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:29:50.093 23:18:07 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:29:50.093 23:18:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:29:50.093 23:18:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:29:50.093 23:18:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:29:50.093 EAL: No free 2048 kB hugepages reported on node 1 00:29:50.353 23:18:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605499 00:29:50.613 23:18:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:29:50.613 23:18:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:29:50.613 23:18:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:29:50.613 EAL: No free 2048 kB hugepages reported on node 1 00:29:50.873 23:18:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:29:50.873 23:18:08 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:29:50.873 23:18:08 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:50.873 23:18:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:50.873 23:18:08 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:29:50.873 23:18:08 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:50.873 23:18:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:50.873 23:18:08 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1057173 00:29:50.873 23:18:08 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:50.873 23:18:08 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:50.873 23:18:08 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1057173 00:29:50.873 23:18:08 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 1057173 ']' 00:29:50.873 23:18:08 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:51.133 23:18:08 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:51.133 23:18:08 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:51.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:51.134 23:18:08 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:51.134 23:18:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:51.134 [2024-07-24 23:18:08.708746] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:29:51.134 [2024-07-24 23:18:08.708804] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:51.134 EAL: No free 2048 kB hugepages reported on node 1 00:29:51.134 [2024-07-24 23:18:08.782125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:51.134 [2024-07-24 23:18:08.850388] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:51.134 [2024-07-24 23:18:08.850426] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:51.134 [2024-07-24 23:18:08.850433] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:51.134 [2024-07-24 23:18:08.850440] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:51.134 [2024-07-24 23:18:08.850446] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:51.134 [2024-07-24 23:18:08.850582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.134 [2024-07-24 23:18:08.850701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:51.134 [2024-07-24 23:18:08.850857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.134 [2024-07-24 23:18:08.850858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:51.705 23:18:09 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:51.705 23:18:09 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:29:51.705 23:18:09 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:29:51.705 23:18:09 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.705 23:18:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:51.705 INFO: Log level set to 20 00:29:51.705 INFO: Requests: 00:29:51.705 { 00:29:51.705 "jsonrpc": "2.0", 00:29:51.705 "method": "nvmf_set_config", 00:29:51.705 "id": 1, 00:29:51.705 "params": { 00:29:51.705 "admin_cmd_passthru": { 00:29:51.705 "identify_ctrlr": true 00:29:51.705 } 00:29:51.705 } 00:29:51.705 } 00:29:51.705 00:29:51.705 INFO: response: 00:29:51.705 { 00:29:51.705 "jsonrpc": "2.0", 00:29:51.705 "id": 1, 00:29:51.705 "result": true 00:29:51.705 } 00:29:51.705 00:29:51.705 23:18:09 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.705 23:18:09 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:29:51.705 23:18:09 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.705 23:18:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:51.705 INFO: Setting log level to 20 00:29:51.705 INFO: Setting log level to 20 00:29:51.705 INFO: Log level set to 20 00:29:51.705 INFO: Log level set to 20 00:29:51.705 
INFO: Requests: 00:29:51.705 { 00:29:51.705 "jsonrpc": "2.0", 00:29:51.705 "method": "framework_start_init", 00:29:51.705 "id": 1 00:29:51.705 } 00:29:51.705 00:29:51.705 INFO: Requests: 00:29:51.705 { 00:29:51.705 "jsonrpc": "2.0", 00:29:51.705 "method": "framework_start_init", 00:29:51.705 "id": 1 00:29:51.705 } 00:29:51.705 00:29:51.966 [2024-07-24 23:18:09.545168] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:29:51.966 INFO: response: 00:29:51.966 { 00:29:51.966 "jsonrpc": "2.0", 00:29:51.966 "id": 1, 00:29:51.966 "result": true 00:29:51.966 } 00:29:51.966 00:29:51.966 INFO: response: 00:29:51.966 { 00:29:51.966 "jsonrpc": "2.0", 00:29:51.966 "id": 1, 00:29:51.966 "result": true 00:29:51.966 } 00:29:51.966 00:29:51.966 23:18:09 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.966 23:18:09 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:51.966 23:18:09 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.966 23:18:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:51.966 INFO: Setting log level to 40 00:29:51.966 INFO: Setting log level to 40 00:29:51.966 INFO: Setting log level to 40 00:29:51.966 [2024-07-24 23:18:09.558485] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:51.966 23:18:09 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.966 23:18:09 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:29:51.966 23:18:09 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:51.966 23:18:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:51.967 23:18:09 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:29:51.967 23:18:09 
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.967 23:18:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:52.227 Nvme0n1 00:29:52.227 23:18:09 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.227 23:18:09 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:29:52.227 23:18:09 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.227 23:18:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:52.227 23:18:09 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.227 23:18:09 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:52.227 23:18:09 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.228 23:18:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:52.228 23:18:09 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.228 23:18:09 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:52.228 23:18:09 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.228 23:18:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:52.228 [2024-07-24 23:18:09.949020] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:52.228 23:18:09 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.228 23:18:09 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:29:52.228 23:18:09 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.228 23:18:09 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:52.228 [ 00:29:52.228 { 00:29:52.228 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:52.228 "subtype": "Discovery", 00:29:52.228 "listen_addresses": [], 00:29:52.228 "allow_any_host": true, 00:29:52.228 "hosts": [] 00:29:52.228 }, 00:29:52.228 { 00:29:52.228 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:52.228 "subtype": "NVMe", 00:29:52.228 "listen_addresses": [ 00:29:52.228 { 00:29:52.228 "trtype": "TCP", 00:29:52.228 "adrfam": "IPv4", 00:29:52.228 "traddr": "10.0.0.2", 00:29:52.228 "trsvcid": "4420" 00:29:52.228 } 00:29:52.228 ], 00:29:52.228 "allow_any_host": true, 00:29:52.228 "hosts": [], 00:29:52.228 "serial_number": "SPDK00000000000001", 00:29:52.228 "model_number": "SPDK bdev Controller", 00:29:52.228 "max_namespaces": 1, 00:29:52.228 "min_cntlid": 1, 00:29:52.228 "max_cntlid": 65519, 00:29:52.228 "namespaces": [ 00:29:52.228 { 00:29:52.228 "nsid": 1, 00:29:52.228 "bdev_name": "Nvme0n1", 00:29:52.228 "name": "Nvme0n1", 00:29:52.228 "nguid": "363447305260549900253845000000A3", 00:29:52.228 "uuid": "36344730-5260-5499-0025-3845000000a3" 00:29:52.228 } 00:29:52.228 ] 00:29:52.228 } 00:29:52.228 ] 00:29:52.228 23:18:09 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.228 23:18:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:52.228 23:18:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:29:52.228 23:18:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:29:52.228 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.488 23:18:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605499 00:29:52.489 23:18:10 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:52.489 23:18:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:29:52.489 23:18:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:29:52.489 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.749 23:18:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:29:52.749 23:18:10 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605499 '!=' S64GNE0R605499 ']' 00:29:52.749 23:18:10 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:29:52.749 23:18:10 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:52.749 23:18:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.749 23:18:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:52.749 23:18:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.749 23:18:10 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:29:52.749 23:18:10 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:29:52.749 23:18:10 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:52.749 23:18:10 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:29:52.749 23:18:10 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:52.749 23:18:10 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:29:52.749 23:18:10 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:52.749 23:18:10 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:52.749 rmmod 
nvme_tcp 00:29:52.749 rmmod nvme_fabrics 00:29:52.749 rmmod nvme_keyring 00:29:52.749 23:18:10 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:52.749 23:18:10 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:29:52.749 23:18:10 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:29:52.749 23:18:10 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1057173 ']' 00:29:52.749 23:18:10 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1057173 00:29:52.749 23:18:10 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 1057173 ']' 00:29:52.749 23:18:10 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 1057173 00:29:52.749 23:18:10 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:29:52.749 23:18:10 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:52.749 23:18:10 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1057173 00:29:52.749 23:18:10 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:52.749 23:18:10 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:52.749 23:18:10 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1057173' 00:29:52.749 killing process with pid 1057173 00:29:52.749 23:18:10 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 1057173 00:29:52.749 23:18:10 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 1057173 00:29:53.010 23:18:10 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:53.010 23:18:10 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:53.010 23:18:10 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:53.010 23:18:10 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:29:53.010 23:18:10 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:53.010 23:18:10 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:53.010 23:18:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:53.010 23:18:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:55.558 23:18:12 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:55.558 00:29:55.558 real 0m13.431s 00:29:55.558 user 0m9.993s 00:29:55.558 sys 0m6.703s 00:29:55.558 23:18:12 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:55.558 23:18:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:55.558 ************************************ 00:29:55.558 END TEST nvmf_identify_passthru 00:29:55.558 ************************************ 00:29:55.558 23:18:12 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:29:55.558 23:18:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:55.558 23:18:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:55.558 23:18:12 -- common/autotest_common.sh@10 -- # set +x 00:29:55.558 ************************************ 00:29:55.558 START TEST nvmf_dif 00:29:55.558 ************************************ 00:29:55.559 23:18:12 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:29:55.559 * Looking for test storage... 
00:29:55.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:55.559 23:18:12 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:55.559 23:18:12 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:29:55.559 23:18:12 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:55.559 23:18:12 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:55.559 23:18:12 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:55.559 23:18:12 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:55.559 23:18:12 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:55.559 23:18:12 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:55.559 23:18:12 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:55.559 23:18:12 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:55.559 23:18:12 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:55.559 23:18:12 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:55.559 23:18:12 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:55.559 23:18:12 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:55.559 23:18:12 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:55.559 23:18:12 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:55.559 23:18:12 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:55.559 23:18:12 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:55.559 23:18:12 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:55.559 23:18:13 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:55.559 23:18:13 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:55.559 23:18:13 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:55.559 23:18:13 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.559 23:18:13 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.559 23:18:13 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.559 23:18:13 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:29:55.559 23:18:13 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.559 23:18:13 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:29:55.559 23:18:13 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:55.559 23:18:13 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:55.559 23:18:13 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:55.559 23:18:13 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:55.559 23:18:13 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:55.559 23:18:13 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:55.559 23:18:13 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:55.559 23:18:13 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:55.559 23:18:13 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:29:55.559 23:18:13 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:29:55.559 23:18:13 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:29:55.559 23:18:13 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:29:55.559 23:18:13 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:29:55.559 23:18:13 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:55.559 23:18:13 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:55.559 23:18:13 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:55.559 23:18:13 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:55.559 23:18:13 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:55.559 23:18:13 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.559 23:18:13 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:55.559 23:18:13 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:55.559 23:18:13 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:55.559 23:18:13 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:55.559 23:18:13 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:29:55.559 23:18:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:03.698 23:18:20 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:03.698 23:18:20 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:30:03.698 23:18:20 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:03.698 23:18:20 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:03.698 23:18:20 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:03.698 23:18:20 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:03.698 23:18:20 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:03.698 23:18:20 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:30:03.698 23:18:20 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:03.698 23:18:20 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:30:03.698 23:18:20 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:30:03.698 23:18:20 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:30:03.698 23:18:20 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:30:03.698 23:18:20 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:30:03.698 23:18:20 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:30:03.698 23:18:20 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:03.698 23:18:20 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:03.698 23:18:20 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:03.698 23:18:20 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:30:03.698 23:18:20 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:03.698 23:18:20 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:03.698 23:18:20 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:03.698 23:18:20 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:03.698 23:18:20 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:03.698 23:18:20 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:03.698 23:18:20 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:03.698 23:18:20 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:03.698 23:18:20 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:03.698 23:18:20 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:03.698 23:18:20 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:03.699 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 
(0x8086 - 0x159b)' 00:30:03.699 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:03.699 Found net devices under 0000:31:00.0: cvl_0_0 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:03.699 Found net devices under 0000:31:00.1: cvl_0_1 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:03.699 23:18:20 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:03.699 23:18:20 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:03.699 23:18:21 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:03.699 23:18:21 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:03.699 23:18:21 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:03.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:03.699 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.538 ms 00:30:03.699 00:30:03.699 --- 10.0.0.2 ping statistics --- 00:30:03.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.699 rtt min/avg/max/mdev = 0.538/0.538/0.538/0.000 ms 00:30:03.699 23:18:21 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:03.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:03.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.411 ms 00:30:03.699 00:30:03.699 --- 10.0.0.1 ping statistics --- 00:30:03.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.699 rtt min/avg/max/mdev = 0.411/0.411/0.411/0.000 ms 00:30:03.699 23:18:21 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:03.699 23:18:21 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:30:03.699 23:18:21 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:03.699 23:18:21 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:07.908 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:07.908 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:07.908 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:07.908 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:07.908 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:07.908 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:07.908 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:07.908 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:07.908 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:07.908 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:30:07.908 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:07.908 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:07.908 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:07.908 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:07.908 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:07.908 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:07.908 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:07.908 23:18:24 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:07.908 23:18:24 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:07.908 23:18:24 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:07.908 23:18:24 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:07.908 23:18:24 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:07.908 23:18:24 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:07.908 23:18:25 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:07.908 23:18:25 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:07.908 23:18:25 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:07.908 23:18:25 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:07.908 23:18:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:07.908 23:18:25 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1064263 00:30:07.908 23:18:25 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1064263 00:30:07.908 23:18:25 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:07.908 23:18:25 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 1064263 ']' 00:30:07.908 23:18:25 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:07.908 23:18:25 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:07.908 23:18:25 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:07.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:07.908 23:18:25 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:07.908 23:18:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:07.908 [2024-07-24 23:18:25.078930] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:30:07.908 [2024-07-24 23:18:25.078978] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:07.908 EAL: No free 2048 kB hugepages reported on node 1 00:30:07.908 [2024-07-24 23:18:25.153102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.908 [2024-07-24 23:18:25.221097] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:07.908 [2024-07-24 23:18:25.221132] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:07.908 [2024-07-24 23:18:25.221140] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:07.908 [2024-07-24 23:18:25.221146] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:07.908 [2024-07-24 23:18:25.221152] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:07.908 [2024-07-24 23:18:25.221174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.169 23:18:25 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:08.169 23:18:25 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:30:08.169 23:18:25 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:08.169 23:18:25 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:08.169 23:18:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:08.169 23:18:25 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:08.169 23:18:25 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:08.169 23:18:25 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:08.169 23:18:25 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.169 23:18:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:08.169 [2024-07-24 23:18:25.883323] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:08.169 23:18:25 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.169 23:18:25 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:08.169 23:18:25 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:08.169 23:18:25 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:08.169 23:18:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:08.169 ************************************ 00:30:08.169 START TEST fio_dif_1_default 00:30:08.169 ************************************ 00:30:08.169 23:18:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:30:08.169 23:18:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:08.169 23:18:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:08.169 23:18:25 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:30:08.169 23:18:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:30:08.169 23:18:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:08.169 23:18:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:08.169 23:18:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.169 23:18:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:08.169 bdev_null0 00:30:08.169 23:18:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.169 23:18:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:08.169 23:18:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.169 23:18:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:08.169 23:18:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.169 23:18:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:08.169 23:18:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.169 23:18:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:08.430 23:18:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.430 23:18:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:08.430 23:18:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.430 23:18:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:08.430 [2024-07-24 23:18:25.967667] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:08.430 23:18:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.430 23:18:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:08.430 23:18:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:08.430 23:18:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:08.430 23:18:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:08.430 23:18:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:08.430 23:18:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:08.430 23:18:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:08.430 23:18:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:30:08.430 23:18:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:08.430 23:18:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:30:08.430 23:18:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:08.431 23:18:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:08.431 23:18:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:08.431 23:18:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:08.431 { 00:30:08.431 "params": { 00:30:08.431 "name": "Nvme$subsystem", 00:30:08.431 "trtype": "$TEST_TRANSPORT", 00:30:08.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:08.431 "adrfam": "ipv4", 00:30:08.431 "trsvcid": "$NVMF_PORT", 00:30:08.431 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:30:08.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:08.431 "hdgst": ${hdgst:-false}, 00:30:08.431 "ddgst": ${ddgst:-false} 00:30:08.431 }, 00:30:08.431 "method": "bdev_nvme_attach_controller" 00:30:08.431 } 00:30:08.431 EOF 00:30:08.431 )") 00:30:08.431 23:18:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:08.431 23:18:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:08.431 23:18:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:30:08.431 23:18:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:08.431 23:18:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:08.431 23:18:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:30:08.431 23:18:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:08.431 23:18:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:30:08.431 23:18:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:08.431 23:18:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:30:08.431 23:18:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:08.431 23:18:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:30:08.431 23:18:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:30:08.431 23:18:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:08.431 "params": { 00:30:08.431 "name": "Nvme0", 00:30:08.431 "trtype": "tcp", 00:30:08.431 "traddr": "10.0.0.2", 00:30:08.431 "adrfam": "ipv4", 00:30:08.431 "trsvcid": "4420", 00:30:08.431 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:08.431 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:08.431 "hdgst": false, 00:30:08.431 "ddgst": false 00:30:08.431 }, 00:30:08.431 "method": "bdev_nvme_attach_controller" 00:30:08.431 }' 00:30:08.431 23:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:08.431 23:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:08.431 23:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:08.431 23:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:08.431 23:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:08.431 23:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:08.431 23:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:08.431 23:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:08.431 23:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:08.431 23:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:08.691 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:08.691 fio-3.35 
00:30:08.691 Starting 1 thread 00:30:08.691 EAL: No free 2048 kB hugepages reported on node 1 00:30:20.924 00:30:20.924 filename0: (groupid=0, jobs=1): err= 0: pid=1064798: Wed Jul 24 23:18:37 2024 00:30:20.924 read: IOPS=95, BW=381KiB/s (390kB/s)(3808KiB/10006msec) 00:30:20.924 slat (nsec): min=2862, max=41988, avg=5521.17, stdev=1249.56 00:30:20.924 clat (usec): min=41875, max=49319, avg=42024.26, stdev=483.83 00:30:20.924 lat (usec): min=41881, max=49333, avg=42029.78, stdev=483.84 00:30:20.924 clat percentiles (usec): 00:30:20.924 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:30:20.924 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:30:20.924 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:20.924 | 99.00th=[42730], 99.50th=[43254], 99.90th=[49546], 99.95th=[49546], 00:30:20.924 | 99.99th=[49546] 00:30:20.924 bw ( KiB/s): min= 352, max= 384, per=99.59%, avg=379.20, stdev=11.72, samples=20 00:30:20.924 iops : min= 88, max= 96, avg=94.80, stdev= 2.93, samples=20 00:30:20.924 lat (msec) : 50=100.00% 00:30:20.924 cpu : usr=95.69%, sys=4.11%, ctx=14, majf=0, minf=238 00:30:20.924 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:20.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.924 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.924 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:20.924 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:20.924 00:30:20.924 Run status group 0 (all jobs): 00:30:20.924 READ: bw=381KiB/s (390kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=3808KiB (3899kB), run=10006-10006msec 00:30:20.924 23:18:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:20.924 23:18:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:20.924 23:18:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for 
sub in "$@" 00:30:20.924 23:18:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:20.924 23:18:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:20.924 23:18:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:20.924 23:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.924 23:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:20.924 23:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.924 23:18:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:20.924 23:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.924 23:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:20.924 23:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.924 00:30:20.924 real 0m11.279s 00:30:20.924 user 0m27.640s 00:30:20.924 sys 0m0.753s 00:30:20.924 23:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:20.924 23:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:20.924 ************************************ 00:30:20.924 END TEST fio_dif_1_default 00:30:20.924 ************************************ 00:30:20.924 23:18:37 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:20.925 23:18:37 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:20.925 23:18:37 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:20.925 23:18:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:20.925 ************************************ 00:30:20.925 START TEST fio_dif_1_multi_subsystems 00:30:20.925 ************************************ 00:30:20.925 23:18:37 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:20.925 bdev_null0 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.925 23:18:37 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:20.925 [2024-07-24 23:18:37.316537] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:20.925 bdev_null1 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:20.925 { 00:30:20.925 "params": { 00:30:20.925 "name": "Nvme$subsystem", 00:30:20.925 "trtype": "$TEST_TRANSPORT", 00:30:20.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:20.925 "adrfam": "ipv4", 00:30:20.925 "trsvcid": "$NVMF_PORT", 00:30:20.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:20.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:20.925 "hdgst": ${hdgst:-false}, 00:30:20.925 "ddgst": ${ddgst:-false} 00:30:20.925 }, 00:30:20.925 "method": "bdev_nvme_attach_controller" 00:30:20.925 } 00:30:20.925 EOF 00:30:20.925 )") 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 
00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:20.925 { 00:30:20.925 "params": { 00:30:20.925 "name": "Nvme$subsystem", 00:30:20.925 "trtype": "$TEST_TRANSPORT", 00:30:20.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:20.925 "adrfam": "ipv4", 00:30:20.925 "trsvcid": "$NVMF_PORT", 00:30:20.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:20.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:20.925 "hdgst": ${hdgst:-false}, 00:30:20.925 "ddgst": ${ddgst:-false} 00:30:20.925 }, 00:30:20.925 "method": "bdev_nvme_attach_controller" 00:30:20.925 } 00:30:20.925 EOF 00:30:20.925 )") 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:30:20.925 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:20.925 "params": { 00:30:20.925 "name": "Nvme0", 00:30:20.925 "trtype": "tcp", 00:30:20.925 "traddr": "10.0.0.2", 00:30:20.925 "adrfam": "ipv4", 00:30:20.925 "trsvcid": "4420", 00:30:20.925 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:20.925 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:20.925 "hdgst": false, 00:30:20.925 "ddgst": false 00:30:20.925 }, 00:30:20.925 "method": "bdev_nvme_attach_controller" 00:30:20.925 },{ 00:30:20.925 "params": { 00:30:20.925 "name": "Nvme1", 00:30:20.925 "trtype": "tcp", 00:30:20.926 "traddr": "10.0.0.2", 00:30:20.926 "adrfam": "ipv4", 00:30:20.926 "trsvcid": "4420", 00:30:20.926 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:20.926 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:20.926 "hdgst": false, 00:30:20.926 "ddgst": false 00:30:20.926 }, 00:30:20.926 "method": "bdev_nvme_attach_controller" 00:30:20.926 }' 00:30:20.926 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:20.926 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:20.926 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:20.926 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:20.926 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:20.926 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:20.926 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:20.926 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:20.926 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:20.926 23:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:20.926 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:20.926 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:20.926 fio-3.35 00:30:20.926 Starting 2 threads 00:30:20.926 EAL: No free 2048 kB hugepages reported on node 1 00:30:30.986 00:30:30.986 filename0: (groupid=0, jobs=1): err= 0: pid=1067308: Wed Jul 24 23:18:48 2024 00:30:30.986 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10041msec) 00:30:30.986 slat (nsec): min=5372, max=40069, avg=6188.56, stdev=1576.81 00:30:30.986 clat (usec): min=41862, max=43010, avg=41994.50, stdev=107.33 00:30:30.986 lat (usec): min=41867, max=43016, avg=42000.69, stdev=107.62 00:30:30.986 clat percentiles (usec): 00:30:30.986 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:30:30.986 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:30:30.986 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:30.986 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:30:30.986 | 99.99th=[43254] 00:30:30.986 bw ( KiB/s): min= 352, max= 384, per=49.89%, avg=380.80, stdev= 9.85, samples=20 00:30:30.986 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:30:30.986 lat (msec) : 50=100.00% 00:30:30.986 cpu : usr=96.83%, sys=2.97%, ctx=9, majf=0, minf=94 00:30:30.986 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:30.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:30:30.986 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:30.986 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:30.986 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:30.986 filename1: (groupid=0, jobs=1): err= 0: pid=1067309: Wed Jul 24 23:18:48 2024 00:30:30.986 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10038msec) 00:30:30.986 slat (nsec): min=5382, max=51994, avg=6331.83, stdev=2200.65 00:30:30.986 clat (usec): min=41008, max=42577, avg=41980.63, stdev=77.17 00:30:30.986 lat (usec): min=41016, max=42612, avg=41986.96, stdev=77.50 00:30:30.986 clat percentiles (usec): 00:30:30.986 | 1.00th=[41681], 5.00th=[41681], 10.00th=[42206], 20.00th=[42206], 00:30:30.986 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:30:30.986 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:30.986 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:30:30.986 | 99.99th=[42730] 00:30:30.986 bw ( KiB/s): min= 352, max= 384, per=49.89%, avg=380.80, stdev= 9.85, samples=20 00:30:30.986 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:30:30.986 lat (msec) : 50=100.00% 00:30:30.986 cpu : usr=96.68%, sys=3.12%, ctx=10, majf=0, minf=141 00:30:30.986 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:30.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:30.986 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:30.986 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:30.986 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:30.986 00:30:30.986 Run status group 0 (all jobs): 00:30:30.986 READ: bw=762KiB/s (780kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=7648KiB (7832kB), run=10038-10041msec 00:30:30.986 23:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:30.986 
23:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:30:30.986 23:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:30.986 23:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:30.986 23:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:30:30.986 23:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:30.986 23:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.986 23:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:30.986 23:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.986 23:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:30.986 23:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.986 23:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:31.248 23:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.248 23:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:31.248 23:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:31.248 23:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:30:31.248 23:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:31.248 23:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.248 23:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:31.248 23:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.248 23:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:31.248 23:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.248 23:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:31.248 23:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.248 00:30:31.248 real 0m11.520s 00:30:31.248 user 0m37.395s 00:30:31.248 sys 0m0.950s 00:30:31.248 23:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:31.248 23:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:31.248 ************************************ 00:30:31.248 END TEST fio_dif_1_multi_subsystems 00:30:31.248 ************************************ 00:30:31.248 23:18:48 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:31.248 23:18:48 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:31.248 23:18:48 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:31.248 23:18:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:31.248 ************************************ 00:30:31.248 START TEST fio_dif_rand_params 00:30:31.248 ************************************ 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@103 -- # numjobs=3 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:31.248 bdev_null0 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:31.248 23:18:48 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:31.248 [2024-07-24 23:18:48.912677] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:31.248 { 00:30:31.248 "params": { 00:30:31.248 "name": "Nvme$subsystem", 00:30:31.248 "trtype": "$TEST_TRANSPORT", 00:30:31.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:31.248 "adrfam": "ipv4", 00:30:31.248 "trsvcid": "$NVMF_PORT", 00:30:31.248 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:31.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:31.248 "hdgst": ${hdgst:-false}, 00:30:31.248 "ddgst": ${ddgst:-false} 00:30:31.248 }, 00:30:31.248 "method": "bdev_nvme_attach_controller" 00:30:31.248 } 00:30:31.248 EOF 00:30:31.248 )") 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:31.248 23:18:48 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:31.248 23:18:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:31.249 23:18:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:31.249 "params": { 00:30:31.249 "name": "Nvme0", 00:30:31.249 "trtype": "tcp", 00:30:31.249 "traddr": "10.0.0.2", 00:30:31.249 "adrfam": "ipv4", 00:30:31.249 "trsvcid": "4420", 00:30:31.249 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:31.249 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:31.249 "hdgst": false, 00:30:31.249 "ddgst": false 00:30:31.249 }, 00:30:31.249 "method": "bdev_nvme_attach_controller" 00:30:31.249 }' 00:30:31.249 23:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:31.249 23:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:31.249 23:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:31.249 23:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:31.249 23:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:31.249 23:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:31.249 23:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:31.249 23:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:31.249 23:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:31.249 23:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:30:31.848 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:31.848 ... 00:30:31.848 fio-3.35 00:30:31.848 Starting 3 threads 00:30:31.848 EAL: No free 2048 kB hugepages reported on node 1 00:30:38.424 00:30:38.424 filename0: (groupid=0, jobs=1): err= 0: pid=1069505: Wed Jul 24 23:18:54 2024 00:30:38.424 read: IOPS=128, BW=16.1MiB/s (16.9MB/s)(81.1MiB/5035msec) 00:30:38.424 slat (nsec): min=5400, max=32361, avg=7758.02, stdev=1881.80 00:30:38.424 clat (usec): min=6470, max=96822, avg=23259.32, stdev=21164.28 00:30:38.424 lat (usec): min=6478, max=96830, avg=23267.08, stdev=21164.39 00:30:38.424 clat percentiles (usec): 00:30:38.424 | 1.00th=[ 7046], 5.00th=[ 7570], 10.00th=[ 8029], 20.00th=[ 8455], 00:30:38.424 | 30.00th=[ 9241], 40.00th=[10421], 50.00th=[11731], 60.00th=[13304], 00:30:38.424 | 70.00th=[16057], 80.00th=[51119], 90.00th=[54264], 95.00th=[54789], 00:30:38.424 | 99.00th=[93848], 99.50th=[94897], 99.90th=[96994], 99.95th=[96994], 00:30:38.424 | 99.99th=[96994] 00:30:38.424 bw ( KiB/s): min=12800, max=21504, per=30.81%, avg=16537.60, stdev=2666.44, samples=10 00:30:38.424 iops : min= 100, max= 168, avg=129.20, stdev=20.83, samples=10 00:30:38.424 lat (msec) : 10=36.67%, 20=34.82%, 50=4.01%, 100=24.50% 00:30:38.424 cpu : usr=96.40%, sys=3.36%, ctx=7, majf=0, minf=34 00:30:38.424 IO depths : 1=4.3%, 2=95.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:38.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.424 issued rwts: total=649,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.424 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:38.424 filename0: (groupid=0, jobs=1): err= 0: pid=1069506: Wed Jul 24 23:18:54 2024 00:30:38.424 read: IOPS=154, BW=19.4MiB/s (20.3MB/s)(97.4MiB/5032msec) 00:30:38.424 slat (nsec): min=5380, 
max=33527, avg=7555.03, stdev=1784.96 00:30:38.424 clat (usec): min=6165, max=93517, avg=19349.09, stdev=18848.89 00:30:38.424 lat (usec): min=6173, max=93523, avg=19356.65, stdev=18849.02 00:30:38.424 clat percentiles (usec): 00:30:38.424 | 1.00th=[ 6849], 5.00th=[ 7701], 10.00th=[ 8094], 20.00th=[ 8586], 00:30:38.424 | 30.00th=[ 9110], 40.00th=[ 9634], 50.00th=[10159], 60.00th=[10945], 00:30:38.424 | 70.00th=[12387], 80.00th=[49021], 90.00th=[51119], 95.00th=[52167], 00:30:38.424 | 99.00th=[91751], 99.50th=[92799], 99.90th=[93848], 99.95th=[93848], 00:30:38.424 | 99.99th=[93848] 00:30:38.424 bw ( KiB/s): min=13568, max=28928, per=37.01%, avg=19861.00, stdev=4264.94, samples=10 00:30:38.424 iops : min= 106, max= 226, avg=155.10, stdev=33.27, samples=10 00:30:38.424 lat (msec) : 10=47.88%, 20=30.55%, 50=6.29%, 100=15.28% 00:30:38.424 cpu : usr=96.84%, sys=2.90%, ctx=9, majf=0, minf=93 00:30:38.424 IO depths : 1=5.0%, 2=95.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:38.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.424 issued rwts: total=779,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.424 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:38.424 filename0: (groupid=0, jobs=1): err= 0: pid=1069507: Wed Jul 24 23:18:54 2024 00:30:38.424 read: IOPS=136, BW=17.1MiB/s (17.9MB/s)(85.4MiB/5005msec) 00:30:38.424 slat (nsec): min=5371, max=31395, avg=7838.20, stdev=1955.62 00:30:38.424 clat (usec): min=6776, max=93624, avg=21971.70, stdev=21156.25 00:30:38.424 lat (usec): min=6782, max=93633, avg=21979.54, stdev=21156.21 00:30:38.424 clat percentiles (usec): 00:30:38.424 | 1.00th=[ 7242], 5.00th=[ 7832], 10.00th=[ 8160], 20.00th=[ 8717], 00:30:38.424 | 30.00th=[ 9503], 40.00th=[10028], 50.00th=[10552], 60.00th=[11731], 00:30:38.424 | 70.00th=[13173], 80.00th=[50070], 90.00th=[51643], 95.00th=[53216], 00:30:38.424 | 
99.00th=[91751], 99.50th=[92799], 99.90th=[93848], 99.95th=[93848], 00:30:38.424 | 99.99th=[93848] 00:30:38.424 bw ( KiB/s): min= 9216, max=22272, per=32.44%, avg=17408.00, stdev=3812.40, samples=10 00:30:38.424 iops : min= 72, max= 174, avg=136.00, stdev=29.78, samples=10 00:30:38.424 lat (msec) : 10=40.70%, 20=33.24%, 50=6.30%, 100=19.77% 00:30:38.424 cpu : usr=96.90%, sys=2.86%, ctx=17, majf=0, minf=148 00:30:38.424 IO depths : 1=2.9%, 2=97.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:38.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.424 issued rwts: total=683,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.424 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:38.424 00:30:38.424 Run status group 0 (all jobs): 00:30:38.424 READ: bw=52.4MiB/s (55.0MB/s), 16.1MiB/s-19.4MiB/s (16.9MB/s-20.3MB/s), io=264MiB (277MB), run=5005-5035msec 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 
00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:38.424 bdev_null0 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:38.424 [2024-07-24 23:18:55.185026] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.424 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:38.425 bdev_null1 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.425 23:18:55 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:38.425 bdev_null2 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:38.425 23:18:55 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:38.425 { 00:30:38.425 "params": { 00:30:38.425 "name": "Nvme$subsystem", 00:30:38.425 "trtype": "$TEST_TRANSPORT", 00:30:38.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:38.425 "adrfam": "ipv4", 00:30:38.425 "trsvcid": "$NVMF_PORT", 00:30:38.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:38.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:38.425 "hdgst": ${hdgst:-false}, 00:30:38.425 "ddgst": ${ddgst:-false} 00:30:38.425 }, 00:30:38.425 "method": "bdev_nvme_attach_controller" 00:30:38.425 } 00:30:38.425 EOF 00:30:38.425 )") 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:38.425 { 00:30:38.425 "params": { 00:30:38.425 "name": "Nvme$subsystem", 00:30:38.425 "trtype": "$TEST_TRANSPORT", 00:30:38.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:38.425 "adrfam": "ipv4", 00:30:38.425 "trsvcid": "$NVMF_PORT", 00:30:38.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:38.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:38.425 "hdgst": ${hdgst:-false}, 00:30:38.425 "ddgst": ${ddgst:-false} 00:30:38.425 }, 00:30:38.425 "method": "bdev_nvme_attach_controller" 00:30:38.425 } 00:30:38.425 EOF 00:30:38.425 )") 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:38.425 
23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:38.425 { 00:30:38.425 "params": { 00:30:38.425 "name": "Nvme$subsystem", 00:30:38.425 "trtype": "$TEST_TRANSPORT", 00:30:38.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:38.425 "adrfam": "ipv4", 00:30:38.425 "trsvcid": "$NVMF_PORT", 00:30:38.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:38.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:38.425 "hdgst": ${hdgst:-false}, 00:30:38.425 "ddgst": ${ddgst:-false} 00:30:38.425 }, 00:30:38.425 "method": "bdev_nvme_attach_controller" 00:30:38.425 } 00:30:38.425 EOF 00:30:38.425 )") 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:38.425 "params": { 00:30:38.425 "name": "Nvme0", 00:30:38.425 "trtype": "tcp", 00:30:38.425 "traddr": "10.0.0.2", 00:30:38.425 "adrfam": "ipv4", 00:30:38.425 "trsvcid": "4420", 00:30:38.425 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:38.425 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:38.425 "hdgst": false, 00:30:38.425 "ddgst": false 00:30:38.425 }, 00:30:38.425 "method": "bdev_nvme_attach_controller" 00:30:38.425 },{ 00:30:38.425 "params": { 00:30:38.425 "name": "Nvme1", 00:30:38.425 "trtype": "tcp", 00:30:38.425 "traddr": "10.0.0.2", 00:30:38.425 "adrfam": "ipv4", 00:30:38.425 "trsvcid": "4420", 00:30:38.425 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:38.425 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:38.425 "hdgst": false, 00:30:38.425 "ddgst": false 00:30:38.425 }, 00:30:38.425 "method": "bdev_nvme_attach_controller" 00:30:38.425 },{ 00:30:38.425 "params": { 00:30:38.425 "name": "Nvme2", 00:30:38.425 "trtype": "tcp", 00:30:38.425 "traddr": "10.0.0.2", 00:30:38.425 "adrfam": "ipv4", 00:30:38.425 "trsvcid": "4420", 00:30:38.425 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:38.425 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:38.425 "hdgst": false, 00:30:38.425 "ddgst": false 00:30:38.425 }, 00:30:38.425 "method": "bdev_nvme_attach_controller" 00:30:38.425 }' 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:38.425 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:38.426 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:38.426 23:18:55 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:38.426 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:38.426 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:38.426 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:38.426 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:38.426 23:18:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:38.426 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:38.426 ... 00:30:38.426 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:38.426 ... 00:30:38.426 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:38.426 ... 
00:30:38.426 fio-3.35 00:30:38.426 Starting 24 threads 00:30:38.426 EAL: No free 2048 kB hugepages reported on node 1 00:30:50.661 00:30:50.661 filename0: (groupid=0, jobs=1): err= 0: pid=1071018: Wed Jul 24 23:19:06 2024 00:30:50.661 read: IOPS=492, BW=1970KiB/s (2017kB/s)(19.2MiB/10007msec) 00:30:50.661 slat (nsec): min=5574, max=54501, avg=17009.57, stdev=9914.40 00:30:50.661 clat (usec): min=17382, max=69481, avg=32344.04, stdev=1851.79 00:30:50.661 lat (usec): min=17388, max=69504, avg=32361.05, stdev=1851.72 00:30:50.661 clat percentiles (usec): 00:30:50.661 | 1.00th=[30540], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:30:50.661 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:30:50.661 | 70.00th=[32637], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:30:50.661 | 99.00th=[34866], 99.50th=[39584], 99.90th=[53740], 99.95th=[53740], 00:30:50.661 | 99.99th=[69731] 00:30:50.661 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1966.42, stdev=73.35, samples=19 00:30:50.661 iops : min= 448, max= 512, avg=491.53, stdev=18.25, samples=19 00:30:50.661 lat (msec) : 20=0.14%, 50=99.45%, 100=0.41% 00:30:50.661 cpu : usr=97.81%, sys=1.22%, ctx=60, majf=0, minf=20 00:30:50.661 IO depths : 1=3.9%, 2=10.1%, 4=24.9%, 8=52.5%, 16=8.6%, 32=0.0%, >=64=0.0% 00:30:50.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.661 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.661 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.661 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:50.661 filename0: (groupid=0, jobs=1): err= 0: pid=1071019: Wed Jul 24 23:19:06 2024 00:30:50.661 read: IOPS=492, BW=1971KiB/s (2018kB/s)(19.2MiB/10001msec) 00:30:50.661 slat (nsec): min=5569, max=62834, avg=13522.49, stdev=9240.58 00:30:50.661 clat (usec): min=13985, max=57996, avg=32348.56, stdev=1907.33 00:30:50.661 lat (usec): min=13991, max=58017, avg=32362.08, 
stdev=1907.28 00:30:50.661 clat percentiles (usec): 00:30:50.661 | 1.00th=[31065], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:30:50.661 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:30:50.661 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:30:50.661 | 99.00th=[34341], 99.50th=[35390], 99.90th=[57934], 99.95th=[57934], 00:30:50.661 | 99.99th=[57934] 00:30:50.661 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1966.53, stdev=76.87, samples=19 00:30:50.661 iops : min= 448, max= 512, avg=491.63, stdev=19.22, samples=19 00:30:50.661 lat (msec) : 20=0.32%, 50=99.35%, 100=0.32% 00:30:50.661 cpu : usr=98.96%, sys=0.74%, ctx=38, majf=0, minf=18 00:30:50.661 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:50.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.661 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.661 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.661 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:50.661 filename0: (groupid=0, jobs=1): err= 0: pid=1071020: Wed Jul 24 23:19:06 2024 00:30:50.661 read: IOPS=594, BW=2379KiB/s (2436kB/s)(23.3MiB/10024msec) 00:30:50.661 slat (nsec): min=5529, max=55482, avg=8919.61, stdev=6226.25 00:30:50.661 clat (usec): min=2670, max=41786, avg=26823.84, stdev=5915.38 00:30:50.661 lat (usec): min=2694, max=41806, avg=26832.76, stdev=5917.22 00:30:50.661 clat percentiles (usec): 00:30:50.661 | 1.00th=[12256], 5.00th=[18482], 10.00th=[19530], 20.00th=[21365], 00:30:50.661 | 30.00th=[22414], 40.00th=[23725], 50.00th=[28967], 60.00th=[31851], 00:30:50.661 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:30:50.661 | 99.00th=[33424], 99.50th=[36963], 99.90th=[40633], 99.95th=[41157], 00:30:50.661 | 99.99th=[41681] 00:30:50.662 bw ( KiB/s): min= 1920, max= 2880, per=4.99%, avg=2380.55, stdev=373.87, samples=20 
00:30:50.662 iops : min= 480, max= 720, avg=595.10, stdev=93.50, samples=20 00:30:50.662 lat (msec) : 4=0.27%, 10=0.65%, 20=10.32%, 50=88.76% 00:30:50.662 cpu : usr=99.15%, sys=0.59%, ctx=13, majf=0, minf=18 00:30:50.662 IO depths : 1=2.1%, 2=5.5%, 4=16.4%, 8=65.5%, 16=10.6%, 32=0.0%, >=64=0.0% 00:30:50.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.662 complete : 0=0.0%, 4=91.8%, 8=2.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.662 issued rwts: total=5962,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.662 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:50.662 filename0: (groupid=0, jobs=1): err= 0: pid=1071021: Wed Jul 24 23:19:06 2024 00:30:50.662 read: IOPS=498, BW=1992KiB/s (2040kB/s)(19.5MiB/10023msec) 00:30:50.662 slat (nsec): min=5534, max=81198, avg=17295.59, stdev=12862.31 00:30:50.662 clat (usec): min=11459, max=35694, avg=31981.75, stdev=2210.65 00:30:50.662 lat (usec): min=11484, max=35720, avg=31999.05, stdev=2210.31 00:30:50.662 clat percentiles (usec): 00:30:50.662 | 1.00th=[19530], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:30:50.662 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:30:50.662 | 70.00th=[32637], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:30:50.662 | 99.00th=[34341], 99.50th=[34341], 99.90th=[35390], 99.95th=[35914], 00:30:50.662 | 99.99th=[35914] 00:30:50.662 bw ( KiB/s): min= 1916, max= 2176, per=4.17%, avg=1989.50, stdev=77.43, samples=20 00:30:50.662 iops : min= 479, max= 544, avg=497.30, stdev=19.30, samples=20 00:30:50.662 lat (msec) : 20=1.60%, 50=98.40% 00:30:50.662 cpu : usr=99.31%, sys=0.41%, ctx=14, majf=0, minf=17 00:30:50.662 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:50.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.662 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.662 issued rwts: total=4992,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:30:50.662 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:50.662 filename0: (groupid=0, jobs=1): err= 0: pid=1071022: Wed Jul 24 23:19:06 2024 00:30:50.662 read: IOPS=494, BW=1977KiB/s (2025kB/s)(19.3MiB/10013msec) 00:30:50.662 slat (nsec): min=5580, max=67243, avg=17093.55, stdev=10919.67 00:30:50.662 clat (usec): min=19230, max=49118, avg=32206.34, stdev=1972.83 00:30:50.662 lat (usec): min=19239, max=49131, avg=32223.44, stdev=1973.07 00:30:50.662 clat percentiles (usec): 00:30:50.662 | 1.00th=[23200], 5.00th=[31065], 10.00th=[31589], 20.00th=[31851], 00:30:50.662 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:30:50.662 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:30:50.662 | 99.00th=[39060], 99.50th=[41157], 99.90th=[48497], 99.95th=[48497], 00:30:50.662 | 99.99th=[49021] 00:30:50.662 bw ( KiB/s): min= 1904, max= 2064, per=4.14%, avg=1976.16, stdev=64.29, samples=19 00:30:50.662 iops : min= 476, max= 516, avg=494.00, stdev=16.11, samples=19 00:30:50.662 lat (msec) : 20=0.30%, 50=99.70% 00:30:50.662 cpu : usr=97.41%, sys=1.46%, ctx=52, majf=0, minf=24 00:30:50.662 IO depths : 1=5.3%, 2=11.0%, 4=23.8%, 8=52.5%, 16=7.4%, 32=0.0%, >=64=0.0% 00:30:50.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.662 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.662 issued rwts: total=4950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.662 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:50.662 filename0: (groupid=0, jobs=1): err= 0: pid=1071023: Wed Jul 24 23:19:06 2024 00:30:50.662 read: IOPS=498, BW=1995KiB/s (2043kB/s)(19.5MiB/10019msec) 00:30:50.662 slat (nsec): min=5542, max=75560, avg=12760.69, stdev=10705.92 00:30:50.662 clat (usec): min=4777, max=36456, avg=31967.05, stdev=2715.91 00:30:50.662 lat (usec): min=4789, max=36464, avg=31979.81, stdev=2715.73 00:30:50.662 clat percentiles 
(usec): 00:30:50.662 | 1.00th=[15533], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:30:50.662 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:30:50.662 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:30:50.662 | 99.00th=[34341], 99.50th=[34866], 99.90th=[35914], 99.95th=[36439], 00:30:50.662 | 99.99th=[36439] 00:30:50.662 bw ( KiB/s): min= 1916, max= 2352, per=4.17%, avg=1992.60, stdev=105.70, samples=20 00:30:50.662 iops : min= 479, max= 588, avg=498.15, stdev=26.43, samples=20 00:30:50.662 lat (msec) : 10=0.32%, 20=1.32%, 50=98.36% 00:30:50.662 cpu : usr=98.11%, sys=1.12%, ctx=627, majf=0, minf=23 00:30:50.662 IO depths : 1=6.1%, 2=12.2%, 4=24.6%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:30:50.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.662 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.662 issued rwts: total=4998,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.662 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:50.662 filename0: (groupid=0, jobs=1): err= 0: pid=1071024: Wed Jul 24 23:19:06 2024 00:30:50.662 read: IOPS=491, BW=1966KiB/s (2013kB/s)(19.2MiB/10010msec) 00:30:50.662 slat (nsec): min=5536, max=73999, avg=18646.63, stdev=12175.45 00:30:50.662 clat (usec): min=21593, max=49741, avg=32391.14, stdev=2296.66 00:30:50.662 lat (usec): min=21619, max=49760, avg=32409.79, stdev=2296.35 00:30:50.662 clat percentiles (usec): 00:30:50.662 | 1.00th=[22938], 5.00th=[31065], 10.00th=[31589], 20.00th=[31851], 00:30:50.662 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:30:50.662 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:30:50.662 | 99.00th=[41681], 99.50th=[43254], 99.90th=[49546], 99.95th=[49546], 00:30:50.662 | 99.99th=[49546] 00:30:50.662 bw ( KiB/s): min= 1920, max= 2048, per=4.12%, avg=1965.79, stdev=56.81, samples=19 00:30:50.662 iops : min= 480, max= 512, 
avg=491.37, stdev=14.27, samples=19 00:30:50.662 lat (msec) : 50=100.00% 00:30:50.662 cpu : usr=98.88%, sys=0.83%, ctx=11, majf=0, minf=23 00:30:50.662 IO depths : 1=5.0%, 2=10.0%, 4=21.8%, 8=55.0%, 16=8.2%, 32=0.0%, >=64=0.0% 00:30:50.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.662 complete : 0=0.0%, 4=93.5%, 8=1.3%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.662 issued rwts: total=4920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.662 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:50.662 filename0: (groupid=0, jobs=1): err= 0: pid=1071025: Wed Jul 24 23:19:06 2024 00:30:50.662 read: IOPS=492, BW=1972KiB/s (2019kB/s)(19.3MiB/10010msec) 00:30:50.662 slat (nsec): min=5593, max=69594, avg=16560.36, stdev=10343.80 00:30:50.662 clat (usec): min=16603, max=49049, avg=32316.88, stdev=2967.14 00:30:50.662 lat (usec): min=16610, max=49060, avg=32333.44, stdev=2967.27 00:30:50.662 clat percentiles (usec): 00:30:50.662 | 1.00th=[19268], 5.00th=[31065], 10.00th=[31589], 20.00th=[31851], 00:30:50.662 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:30:50.662 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:30:50.662 | 99.00th=[44827], 99.50th=[46924], 99.90th=[49021], 99.95th=[49021], 00:30:50.662 | 99.99th=[49021] 00:30:50.662 bw ( KiB/s): min= 1920, max= 2052, per=4.13%, avg=1970.05, stdev=56.65, samples=19 00:30:50.662 iops : min= 480, max= 513, avg=492.47, stdev=14.20, samples=19 00:30:50.662 lat (msec) : 20=1.80%, 50=98.20% 00:30:50.662 cpu : usr=97.37%, sys=1.52%, ctx=102, majf=0, minf=26 00:30:50.662 IO depths : 1=4.7%, 2=9.5%, 4=21.1%, 8=56.0%, 16=8.6%, 32=0.0%, >=64=0.0% 00:30:50.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.662 complete : 0=0.0%, 4=93.4%, 8=1.7%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.662 issued rwts: total=4934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.662 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:30:50.662 filename1: (groupid=0, jobs=1): err= 0: pid=1071026: Wed Jul 24 23:19:06 2024 00:30:50.662 read: IOPS=494, BW=1976KiB/s (2023kB/s)(19.3MiB/10004msec) 00:30:50.662 slat (nsec): min=5416, max=68241, avg=13103.72, stdev=9788.19 00:30:50.662 clat (usec): min=5211, max=74788, avg=32290.34, stdev=2901.34 00:30:50.662 lat (usec): min=5217, max=74816, avg=32303.45, stdev=2901.33 00:30:50.662 clat percentiles (usec): 00:30:50.662 | 1.00th=[23725], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:30:50.662 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:30:50.662 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:30:50.662 | 99.00th=[35390], 99.50th=[36439], 99.90th=[64226], 99.95th=[64226], 00:30:50.662 | 99.99th=[74974] 00:30:50.662 bw ( KiB/s): min= 1779, max= 2048, per=4.12%, avg=1966.42, stdev=70.05, samples=19 00:30:50.662 iops : min= 444, max= 512, avg=491.53, stdev=17.59, samples=19 00:30:50.662 lat (msec) : 10=0.32%, 20=0.32%, 50=99.03%, 100=0.32% 00:30:50.662 cpu : usr=99.23%, sys=0.51%, ctx=15, majf=0, minf=24 00:30:50.662 IO depths : 1=0.1%, 2=6.3%, 4=24.9%, 8=56.3%, 16=12.4%, 32=0.0%, >=64=0.0% 00:30:50.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.662 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.662 issued rwts: total=4942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.662 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:50.662 filename1: (groupid=0, jobs=1): err= 0: pid=1071027: Wed Jul 24 23:19:06 2024 00:30:50.662 read: IOPS=500, BW=2003KiB/s (2051kB/s)(19.6MiB/10007msec) 00:30:50.662 slat (nsec): min=5531, max=69207, avg=17778.46, stdev=11281.51 00:30:50.662 clat (usec): min=11563, max=45431, avg=31800.82, stdev=2701.35 00:30:50.662 lat (usec): min=11569, max=45462, avg=31818.60, stdev=2702.98 00:30:50.662 clat percentiles (usec): 00:30:50.662 | 1.00th=[20579], 5.00th=[25297], 
10.00th=[31327], 20.00th=[31851], 00:30:50.662 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:30:50.662 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:30:50.662 | 99.00th=[39060], 99.50th=[41157], 99.90th=[45351], 99.95th=[45351], 00:30:50.662 | 99.99th=[45351] 00:30:50.662 bw ( KiB/s): min= 1916, max= 2176, per=4.19%, avg=2001.26, stdev=87.55, samples=19 00:30:50.662 iops : min= 479, max= 544, avg=500.32, stdev=21.89, samples=19 00:30:50.662 lat (msec) : 20=0.68%, 50=99.32% 00:30:50.662 cpu : usr=99.10%, sys=0.63%, ctx=17, majf=0, minf=20 00:30:50.662 IO depths : 1=4.7%, 2=10.1%, 4=22.4%, 8=54.6%, 16=8.2%, 32=0.0%, >=64=0.0% 00:30:50.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.662 complete : 0=0.0%, 4=93.5%, 8=1.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.662 issued rwts: total=5010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.662 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:50.663 filename1: (groupid=0, jobs=1): err= 0: pid=1071028: Wed Jul 24 23:19:06 2024 00:30:50.663 read: IOPS=499, BW=2000KiB/s (2048kB/s)(19.6MiB/10018msec) 00:30:50.663 slat (nsec): min=5557, max=62052, avg=11905.08, stdev=7793.78 00:30:50.663 clat (usec): min=7607, max=35295, avg=31908.54, stdev=2427.21 00:30:50.663 lat (usec): min=7616, max=35302, avg=31920.45, stdev=2427.38 00:30:50.663 clat percentiles (usec): 00:30:50.663 | 1.00th=[19792], 5.00th=[31065], 10.00th=[31589], 20.00th=[31851], 00:30:50.663 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:30:50.663 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:30:50.663 | 99.00th=[34341], 99.50th=[35390], 99.90th=[35390], 99.95th=[35390], 00:30:50.663 | 99.99th=[35390] 00:30:50.663 bw ( KiB/s): min= 1916, max= 2176, per=4.18%, avg=1996.60, stdev=87.30, samples=20 00:30:50.663 iops : min= 479, max= 544, avg=499.15, stdev=21.83, samples=20 00:30:50.663 lat (msec) : 10=0.18%, 
20=1.10%, 50=98.72% 00:30:50.663 cpu : usr=97.15%, sys=1.65%, ctx=38, majf=0, minf=27 00:30:50.663 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:50.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.663 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.663 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.663 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:50.663 filename1: (groupid=0, jobs=1): err= 0: pid=1071029: Wed Jul 24 23:19:06 2024 00:30:50.663 read: IOPS=493, BW=1975KiB/s (2022kB/s)(19.3MiB/10015msec) 00:30:50.663 slat (nsec): min=5555, max=64459, avg=12321.86, stdev=9187.29 00:30:50.663 clat (usec): min=23658, max=36912, avg=32307.18, stdev=980.11 00:30:50.663 lat (usec): min=23665, max=36944, avg=32319.50, stdev=979.66 00:30:50.663 clat percentiles (usec): 00:30:50.663 | 1.00th=[30540], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:30:50.663 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:30:50.663 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:30:50.663 | 99.00th=[34866], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:30:50.663 | 99.99th=[36963] 00:30:50.663 bw ( KiB/s): min= 1920, max= 2052, per=4.13%, avg=1971.40, stdev=64.59, samples=20 00:30:50.663 iops : min= 480, max= 513, avg=492.85, stdev=16.15, samples=20 00:30:50.663 lat (msec) : 50=100.00% 00:30:50.663 cpu : usr=98.85%, sys=0.74%, ctx=92, majf=0, minf=18 00:30:50.663 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:50.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.663 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.663 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.663 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:50.663 filename1: (groupid=0, jobs=1): 
err= 0: pid=1071030: Wed Jul 24 23:19:06 2024 00:30:50.663 read: IOPS=490, BW=1963KiB/s (2010kB/s)(19.2MiB/10043msec) 00:30:50.663 slat (nsec): min=5557, max=73188, avg=18784.11, stdev=11490.21 00:30:50.663 clat (usec): min=13557, max=59949, avg=32407.14, stdev=2092.28 00:30:50.663 lat (usec): min=13564, max=59971, avg=32425.93, stdev=2091.27 00:30:50.663 clat percentiles (usec): 00:30:50.663 | 1.00th=[30802], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:30:50.663 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:30:50.663 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:30:50.663 | 99.00th=[34866], 99.50th=[54264], 99.90th=[60031], 99.95th=[60031], 00:30:50.663 | 99.99th=[60031] 00:30:50.663 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1966.68, stdev=76.31, samples=19 00:30:50.663 iops : min= 448, max= 512, avg=491.63, stdev=19.04, samples=19 00:30:50.663 lat (msec) : 20=0.08%, 50=99.35%, 100=0.57% 00:30:50.663 cpu : usr=98.83%, sys=0.73%, ctx=11, majf=0, minf=19 00:30:50.663 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:50.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.663 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.663 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.663 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:50.663 filename1: (groupid=0, jobs=1): err= 0: pid=1071031: Wed Jul 24 23:19:06 2024 00:30:50.663 read: IOPS=504, BW=2018KiB/s (2067kB/s)(19.7MiB/10005msec) 00:30:50.663 slat (nsec): min=5530, max=66952, avg=15517.27, stdev=10794.02 00:30:50.663 clat (usec): min=6855, max=56175, avg=31573.53, stdev=3880.84 00:30:50.663 lat (usec): min=6862, max=56197, avg=31589.04, stdev=3882.25 00:30:50.663 clat percentiles (usec): 00:30:50.663 | 1.00th=[19268], 5.00th=[23987], 10.00th=[27395], 20.00th=[31589], 00:30:50.663 | 30.00th=[31851], 40.00th=[32113], 
50.00th=[32113], 60.00th=[32375], 00:30:50.663 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[34341], 00:30:50.663 | 99.00th=[41157], 99.50th=[49021], 99.90th=[56361], 99.95th=[56361], 00:30:50.663 | 99.99th=[56361] 00:30:50.663 bw ( KiB/s): min= 1792, max= 2244, per=4.21%, avg=2010.89, stdev=121.53, samples=19 00:30:50.663 iops : min= 448, max= 561, avg=502.68, stdev=30.31, samples=19 00:30:50.663 lat (msec) : 10=0.18%, 20=1.37%, 50=97.98%, 100=0.48% 00:30:50.663 cpu : usr=99.18%, sys=0.55%, ctx=11, majf=0, minf=17 00:30:50.663 IO depths : 1=4.0%, 2=8.8%, 4=20.1%, 8=58.1%, 16=9.1%, 32=0.0%, >=64=0.0% 00:30:50.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.663 complete : 0=0.0%, 4=92.8%, 8=2.0%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.663 issued rwts: total=5048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.663 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:50.663 filename1: (groupid=0, jobs=1): err= 0: pid=1071032: Wed Jul 24 23:19:06 2024 00:30:50.663 read: IOPS=480, BW=1923KiB/s (1969kB/s)(18.8MiB/10023msec) 00:30:50.663 slat (nsec): min=5553, max=73011, avg=13312.28, stdev=9103.13 00:30:50.663 clat (usec): min=16427, max=57037, avg=33189.50, stdev=5108.96 00:30:50.663 lat (usec): min=16434, max=57057, avg=33202.81, stdev=5109.01 00:30:50.663 clat percentiles (usec): 00:30:50.663 | 1.00th=[19006], 5.00th=[25560], 10.00th=[30802], 20.00th=[31851], 00:30:50.663 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:30:50.663 | 70.00th=[32900], 80.00th=[33424], 90.00th=[40109], 95.00th=[44303], 00:30:50.663 | 99.00th=[49546], 99.50th=[50594], 99.90th=[53216], 99.95th=[56886], 00:30:50.663 | 99.99th=[56886] 00:30:50.663 bw ( KiB/s): min= 1808, max= 2048, per=4.02%, avg=1921.80, stdev=58.74, samples=20 00:30:50.663 iops : min= 452, max= 512, avg=480.45, stdev=14.68, samples=20 00:30:50.663 lat (msec) : 20=1.37%, 50=97.88%, 100=0.75% 00:30:50.663 cpu : usr=97.37%, sys=1.51%, 
ctx=97, majf=0, minf=18 00:30:50.663 IO depths : 1=1.6%, 2=3.9%, 4=14.5%, 8=67.1%, 16=12.8%, 32=0.0%, >=64=0.0% 00:30:50.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.663 complete : 0=0.0%, 4=92.1%, 8=3.8%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.663 issued rwts: total=4818,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.663 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:50.663 filename1: (groupid=0, jobs=1): err= 0: pid=1071033: Wed Jul 24 23:19:06 2024 00:30:50.663 read: IOPS=502, BW=2012KiB/s (2060kB/s)(19.7MiB/10018msec) 00:30:50.663 slat (nsec): min=5540, max=75255, avg=13978.56, stdev=10209.39 00:30:50.663 clat (usec): min=14216, max=50065, avg=31702.13, stdev=3336.91 00:30:50.663 lat (usec): min=14222, max=50079, avg=31716.11, stdev=3337.44 00:30:50.663 clat percentiles (usec): 00:30:50.663 | 1.00th=[16450], 5.00th=[23462], 10.00th=[31065], 20.00th=[31851], 00:30:50.663 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:30:50.663 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:30:50.663 | 99.00th=[41681], 99.50th=[42730], 99.90th=[50070], 99.95th=[50070], 00:30:50.663 | 99.99th=[50070] 00:30:50.663 bw ( KiB/s): min= 1916, max= 2288, per=4.20%, avg=2007.90, stdev=104.96, samples=20 00:30:50.663 iops : min= 479, max= 572, avg=501.90, stdev=26.22, samples=20 00:30:50.663 lat (msec) : 20=1.67%, 50=98.23%, 100=0.10% 00:30:50.663 cpu : usr=99.22%, sys=0.48%, ctx=52, majf=0, minf=35 00:30:50.663 IO depths : 1=5.0%, 2=10.6%, 4=22.9%, 8=53.9%, 16=7.6%, 32=0.0%, >=64=0.0% 00:30:50.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.663 complete : 0=0.0%, 4=93.5%, 8=0.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.663 issued rwts: total=5038,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.663 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:50.663 filename2: (groupid=0, jobs=1): err= 0: pid=1071034: Wed Jul 24 
23:19:06 2024 00:30:50.663 read: IOPS=475, BW=1903KiB/s (1949kB/s)(18.6MiB/10005msec) 00:30:50.663 slat (nsec): min=5480, max=65583, avg=16031.17, stdev=10874.24 00:30:50.663 clat (usec): min=5587, max=64937, avg=33507.33, stdev=5241.33 00:30:50.663 lat (usec): min=5594, max=64960, avg=33523.37, stdev=5240.51 00:30:50.663 clat percentiles (usec): 00:30:50.663 | 1.00th=[17695], 5.00th=[28967], 10.00th=[31589], 20.00th=[31851], 00:30:50.663 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:30:50.663 | 70.00th=[32900], 80.00th=[33424], 90.00th=[41157], 95.00th=[44827], 00:30:50.663 | 99.00th=[49546], 99.50th=[51643], 99.90th=[64750], 99.95th=[64750], 00:30:50.663 | 99.99th=[64750] 00:30:50.663 bw ( KiB/s): min= 1520, max= 2048, per=3.96%, avg=1891.05, stdev=123.16, samples=19 00:30:50.664 iops : min= 380, max= 512, avg=472.68, stdev=30.82, samples=19 00:30:50.664 lat (msec) : 10=0.34%, 20=0.95%, 50=97.79%, 100=0.92% 00:30:50.664 cpu : usr=98.98%, sys=0.67%, ctx=72, majf=0, minf=21 00:30:50.664 IO depths : 1=2.6%, 2=5.3%, 4=15.7%, 8=64.8%, 16=11.7%, 32=0.0%, >=64=0.0% 00:30:50.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.664 complete : 0=0.0%, 4=92.3%, 8=3.6%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.664 issued rwts: total=4760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.664 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:50.664 filename2: (groupid=0, jobs=1): err= 0: pid=1071035: Wed Jul 24 23:19:06 2024 00:30:50.664 read: IOPS=494, BW=1980KiB/s (2027kB/s)(19.3MiB/10002msec) 00:30:50.664 slat (nsec): min=5547, max=56353, avg=8300.67, stdev=4573.53 00:30:50.664 clat (usec): min=21280, max=47680, avg=32256.21, stdev=1501.48 00:30:50.664 lat (usec): min=21318, max=47689, avg=32264.51, stdev=1501.14 00:30:50.664 clat percentiles (usec): 00:30:50.664 | 1.00th=[23987], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:30:50.664 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 
60.00th=[32375], 00:30:50.664 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:30:50.664 | 99.00th=[34866], 99.50th=[35390], 99.90th=[47449], 99.95th=[47449], 00:30:50.664 | 99.99th=[47449] 00:30:50.664 bw ( KiB/s): min= 1920, max= 2096, per=4.14%, avg=1976.74, stdev=68.52, samples=19 00:30:50.664 iops : min= 480, max= 524, avg=494.11, stdev=17.20, samples=19 00:30:50.664 lat (msec) : 50=100.00% 00:30:50.664 cpu : usr=99.18%, sys=0.55%, ctx=27, majf=0, minf=23 00:30:50.664 IO depths : 1=5.7%, 2=11.9%, 4=24.7%, 8=50.9%, 16=6.8%, 32=0.0%, >=64=0.0% 00:30:50.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.664 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.664 issued rwts: total=4950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.664 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:50.664 filename2: (groupid=0, jobs=1): err= 0: pid=1071036: Wed Jul 24 23:19:06 2024 00:30:50.664 read: IOPS=494, BW=1977KiB/s (2024kB/s)(19.3MiB/10004msec) 00:30:50.664 slat (nsec): min=5615, max=75397, avg=19622.16, stdev=11041.53 00:30:50.664 clat (usec): min=5334, max=54785, avg=32186.60, stdev=2310.28 00:30:50.664 lat (usec): min=5340, max=54807, avg=32206.22, stdev=2310.68 00:30:50.664 clat percentiles (usec): 00:30:50.664 | 1.00th=[30802], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:30:50.664 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:30:50.664 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:30:50.664 | 99.00th=[34341], 99.50th=[35390], 99.90th=[54789], 99.95th=[54789], 00:30:50.664 | 99.99th=[54789] 00:30:50.664 bw ( KiB/s): min= 1795, max= 2048, per=4.12%, avg=1967.26, stdev=76.11, samples=19 00:30:50.664 iops : min= 448, max= 512, avg=491.74, stdev=19.15, samples=19 00:30:50.664 lat (msec) : 10=0.32%, 20=0.32%, 50=99.03%, 100=0.32% 00:30:50.664 cpu : usr=99.12%, sys=0.61%, ctx=15, majf=0, minf=23 00:30:50.664 
IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:50.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.664 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.664 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.664 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:50.664 filename2: (groupid=0, jobs=1): err= 0: pid=1071037: Wed Jul 24 23:19:06 2024 00:30:50.664 read: IOPS=495, BW=1982KiB/s (2029kB/s)(19.4MiB/10011msec) 00:30:50.664 slat (nsec): min=5575, max=84376, avg=17842.68, stdev=13951.98 00:30:50.664 clat (usec): min=11913, max=35576, avg=32131.92, stdev=1731.76 00:30:50.664 lat (usec): min=11926, max=35606, avg=32149.77, stdev=1731.58 00:30:50.664 clat percentiles (usec): 00:30:50.664 | 1.00th=[21365], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:30:50.664 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:30:50.664 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:30:50.664 | 99.00th=[34341], 99.50th=[34341], 99.90th=[35390], 99.95th=[35390], 00:30:50.664 | 99.99th=[35390] 00:30:50.664 bw ( KiB/s): min= 1916, max= 2176, per=4.15%, avg=1979.89, stdev=78.02, samples=19 00:30:50.664 iops : min= 479, max= 544, avg=494.89, stdev=19.44, samples=19 00:30:50.664 lat (msec) : 20=0.85%, 50=99.15% 00:30:50.664 cpu : usr=99.25%, sys=0.46%, ctx=21, majf=0, minf=23 00:30:50.664 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:50.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.664 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.664 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.664 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:50.664 filename2: (groupid=0, jobs=1): err= 0: pid=1071038: Wed Jul 24 23:19:06 2024 00:30:50.664 read: IOPS=496, 
BW=1987KiB/s (2035kB/s)(19.4MiB/10018msec) 00:30:50.664 slat (nsec): min=5541, max=80649, avg=17563.29, stdev=12842.24 00:30:50.664 clat (usec): min=9719, max=35693, avg=32060.75, stdev=1976.61 00:30:50.664 lat (usec): min=9761, max=35707, avg=32078.31, stdev=1976.34 00:30:50.664 clat percentiles (usec): 00:30:50.664 | 1.00th=[20055], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:30:50.664 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:30:50.664 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:30:50.664 | 99.00th=[34341], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:30:50.664 | 99.99th=[35914] 00:30:50.664 bw ( KiB/s): min= 1916, max= 2176, per=4.15%, avg=1983.10, stdev=77.63, samples=20 00:30:50.664 iops : min= 479, max= 544, avg=495.70, stdev=19.35, samples=20 00:30:50.664 lat (msec) : 10=0.02%, 20=0.80%, 50=99.18% 00:30:50.664 cpu : usr=97.26%, sys=1.55%, ctx=98, majf=0, minf=22 00:30:50.664 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:50.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.664 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.664 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.664 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:50.664 filename2: (groupid=0, jobs=1): err= 0: pid=1071039: Wed Jul 24 23:19:06 2024 00:30:50.664 read: IOPS=492, BW=1971KiB/s (2018kB/s)(19.2MiB/10003msec) 00:30:50.664 slat (nsec): min=5670, max=70914, avg=19509.11, stdev=11871.75 00:30:50.664 clat (usec): min=17381, max=53793, avg=32304.96, stdev=1607.42 00:30:50.664 lat (usec): min=17389, max=53810, avg=32324.47, stdev=1606.93 00:30:50.664 clat percentiles (usec): 00:30:50.664 | 1.00th=[30802], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:30:50.664 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:30:50.664 | 70.00th=[32637], 
80.00th=[32637], 90.00th=[33162], 95.00th=[33162], 00:30:50.664 | 99.00th=[34341], 99.50th=[35390], 99.90th=[53740], 99.95th=[53740], 00:30:50.664 | 99.99th=[53740] 00:30:50.664 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1966.42, stdev=76.02, samples=19 00:30:50.664 iops : min= 448, max= 512, avg=491.53, stdev=18.92, samples=19 00:30:50.664 lat (msec) : 20=0.32%, 50=99.35%, 100=0.32% 00:30:50.664 cpu : usr=99.23%, sys=0.50%, ctx=13, majf=0, minf=20 00:30:50.664 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:50.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.664 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.664 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.664 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:50.664 filename2: (groupid=0, jobs=1): err= 0: pid=1071040: Wed Jul 24 23:19:06 2024 00:30:50.664 read: IOPS=505, BW=2022KiB/s (2071kB/s)(19.8MiB/10004msec) 00:30:50.664 slat (nsec): min=5494, max=61792, avg=13311.82, stdev=9708.38 00:30:50.664 clat (usec): min=7019, max=54812, avg=31538.42, stdev=4712.49 00:30:50.664 lat (usec): min=7025, max=54838, avg=31551.73, stdev=4713.20 00:30:50.664 clat percentiles (usec): 00:30:50.664 | 1.00th=[18482], 5.00th=[21890], 10.00th=[25560], 20.00th=[31589], 00:30:50.664 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:30:50.664 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33817], 00:30:50.664 | 99.00th=[50070], 99.50th=[52167], 99.90th=[54789], 99.95th=[54789], 00:30:50.664 | 99.99th=[54789] 00:30:50.664 bw ( KiB/s): min= 1795, max= 2272, per=4.22%, avg=2017.84, stdev=125.03, samples=19 00:30:50.664 iops : min= 448, max= 568, avg=504.42, stdev=31.33, samples=19 00:30:50.664 lat (msec) : 10=0.20%, 20=1.42%, 50=97.29%, 100=1.09% 00:30:50.664 cpu : usr=98.99%, sys=0.67%, ctx=80, majf=0, minf=29 00:30:50.664 IO depths : 1=2.7%, 2=7.8%, 
4=21.5%, 8=58.0%, 16=10.0%, 32=0.0%, >=64=0.0% 00:30:50.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.664 complete : 0=0.0%, 4=93.3%, 8=1.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.664 issued rwts: total=5058,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.664 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:50.664 filename2: (groupid=0, jobs=1): err= 0: pid=1071041: Wed Jul 24 23:19:06 2024 00:30:50.664 read: IOPS=493, BW=1975KiB/s (2022kB/s)(19.3MiB/10013msec) 00:30:50.664 slat (nsec): min=5570, max=67443, avg=12213.85, stdev=8748.76 00:30:50.664 clat (usec): min=20339, max=38540, avg=32293.90, stdev=1112.76 00:30:50.664 lat (usec): min=20346, max=38556, avg=32306.11, stdev=1112.57 00:30:50.664 clat percentiles (usec): 00:30:50.664 | 1.00th=[30016], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:30:50.664 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:30:50.664 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:30:50.664 | 99.00th=[34341], 99.50th=[36439], 99.90th=[38536], 99.95th=[38536], 00:30:50.664 | 99.99th=[38536] 00:30:50.664 bw ( KiB/s): min= 1920, max= 2048, per=4.13%, avg=1973.63, stdev=64.62, samples=19 00:30:50.664 iops : min= 480, max= 512, avg=493.37, stdev=16.11, samples=19 00:30:50.664 lat (msec) : 50=100.00% 00:30:50.664 cpu : usr=97.53%, sys=1.37%, ctx=109, majf=0, minf=26 00:30:50.665 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:50.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.665 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.665 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:50.665 00:30:50.665 Run status group 0 (all jobs): 00:30:50.665 READ: bw=46.6MiB/s (48.9MB/s), 1903KiB/s-2379KiB/s (1949kB/s-2436kB/s), io=468MiB (491MB), 
run=10001-10043msec 00:30:50.665 23:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:30:50.665 23:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:50.665 23:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:50.665 23:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:50.665 23:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:50.665 23:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:50.665 23:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.665 23:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@115 -- # runtime=5 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:50.665 bdev_null0 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.665 23:19:07 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:50.665 [2024-07-24 23:19:07.093068] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:50.665 bdev_null1 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:50.665 23:19:07 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:50.665 { 00:30:50.665 "params": { 00:30:50.665 "name": "Nvme$subsystem", 00:30:50.665 "trtype": "$TEST_TRANSPORT", 00:30:50.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.665 
"adrfam": "ipv4", 00:30:50.665 "trsvcid": "$NVMF_PORT", 00:30:50.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.665 "hdgst": ${hdgst:-false}, 00:30:50.665 "ddgst": ${ddgst:-false} 00:30:50.665 }, 00:30:50.665 "method": "bdev_nvme_attach_controller" 00:30:50.665 } 00:30:50.665 EOF 00:30:50.665 )") 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:50.665 23:19:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:50.666 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:50.666 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:50.666 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:50.666 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 
-- # (( file <= files )) 00:30:50.666 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:50.666 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:50.666 23:19:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:50.666 23:19:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:50.666 { 00:30:50.666 "params": { 00:30:50.666 "name": "Nvme$subsystem", 00:30:50.666 "trtype": "$TEST_TRANSPORT", 00:30:50.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.666 "adrfam": "ipv4", 00:30:50.666 "trsvcid": "$NVMF_PORT", 00:30:50.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.666 "hdgst": ${hdgst:-false}, 00:30:50.666 "ddgst": ${ddgst:-false} 00:30:50.666 }, 00:30:50.666 "method": "bdev_nvme_attach_controller" 00:30:50.666 } 00:30:50.666 EOF 00:30:50.666 )") 00:30:50.666 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:50.666 23:19:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:50.666 23:19:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:50.666 23:19:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
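The `config+=("$(cat <<-EOF ... EOF)")` steps traced above build one JSON object per NVMe-oF subsystem and later join them with `IFS=,` before piping through `jq`. A minimal standalone sketch of that pattern (hypothetical values; only the two-subsystem shape from this run is reproduced, and the `jq .` pretty-printing step is omitted):

```shell
set -euo pipefail

# Build one JSON fragment per subsystem, mimicking the config+=(...) heredocs
# in nvmf/common.sh's gen_nvmf_target_json as seen in the trace.
config=()
for subsystem in 0 1; do
  config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem", "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem" }, "method": "bdev_nvme_attach_controller" }
EOF
)")
done

# Join the array elements with commas, as the trace's IFS=, + printf step does:
# "$*" concatenates using the first character of IFS as the separator.
join_config() { local IFS=,; printf '%s\n' "$*"; }
join_config "${config[@]}"
```

The comma-joined string is what fio ultimately receives on `/dev/fd/62` as its `--spdk_json_conf` input.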
00:30:50.666 23:19:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:50.666 23:19:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:50.666 "params": { 00:30:50.666 "name": "Nvme0", 00:30:50.666 "trtype": "tcp", 00:30:50.666 "traddr": "10.0.0.2", 00:30:50.666 "adrfam": "ipv4", 00:30:50.666 "trsvcid": "4420", 00:30:50.666 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:50.666 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:50.666 "hdgst": false, 00:30:50.666 "ddgst": false 00:30:50.666 }, 00:30:50.666 "method": "bdev_nvme_attach_controller" 00:30:50.666 },{ 00:30:50.666 "params": { 00:30:50.666 "name": "Nvme1", 00:30:50.666 "trtype": "tcp", 00:30:50.666 "traddr": "10.0.0.2", 00:30:50.666 "adrfam": "ipv4", 00:30:50.666 "trsvcid": "4420", 00:30:50.666 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:50.666 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:50.666 "hdgst": false, 00:30:50.666 "ddgst": false 00:30:50.666 }, 00:30:50.666 "method": "bdev_nvme_attach_controller" 00:30:50.666 }' 00:30:50.666 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:50.666 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:50.666 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:50.666 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:50.666 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:50.666 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:50.666 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:50.666 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:50.666 23:19:07 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:50.666 23:19:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:50.666 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:50.666 ... 00:30:50.666 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:50.666 ... 00:30:50.666 fio-3.35 00:30:50.666 Starting 4 threads 00:30:50.666 EAL: No free 2048 kB hugepages reported on node 1 00:30:55.951 00:30:55.951 filename0: (groupid=0, jobs=1): err= 0: pid=1073436: Wed Jul 24 23:19:13 2024 00:30:55.951 read: IOPS=1995, BW=15.6MiB/s (16.3MB/s)(78.6MiB/5042msec) 00:30:55.951 slat (nsec): min=5373, max=51521, avg=6299.58, stdev=2008.77 00:30:55.951 clat (usec): min=1888, max=41551, avg=3971.96, stdev=963.78 00:30:55.951 lat (usec): min=1893, max=41558, avg=3978.26, stdev=963.79 00:30:55.951 clat percentiles (usec): 00:30:55.951 | 1.00th=[ 2540], 5.00th=[ 2933], 10.00th=[ 3130], 20.00th=[ 3392], 00:30:55.951 | 30.00th=[ 3556], 40.00th=[ 3720], 50.00th=[ 3851], 60.00th=[ 4047], 00:30:55.951 | 70.00th=[ 4228], 80.00th=[ 4555], 90.00th=[ 4948], 95.00th=[ 5276], 00:30:55.951 | 99.00th=[ 5932], 99.50th=[ 6259], 99.90th=[ 7635], 99.95th=[ 8225], 00:30:55.951 | 99.99th=[41681] 00:30:55.951 bw ( KiB/s): min=15424, max=16576, per=24.29%, avg=16092.70, stdev=444.90, samples=10 00:30:55.951 iops : min= 1928, max= 2072, avg=2011.50, stdev=55.70, samples=10 00:30:55.951 lat (msec) : 2=0.02%, 4=58.12%, 10=41.83%, 50=0.03% 00:30:55.951 cpu : usr=93.47%, sys=4.21%, ctx=63, majf=0, minf=9 00:30:55.951 IO depths : 1=0.3%, 2=2.1%, 4=68.5%, 8=29.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:55.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:30:55.951 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.951 issued rwts: total=10059,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.951 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:55.951 filename0: (groupid=0, jobs=1): err= 0: pid=1073437: Wed Jul 24 23:19:13 2024 00:30:55.951 read: IOPS=2392, BW=18.7MiB/s (19.6MB/s)(93.5MiB/5003msec) 00:30:55.951 slat (nsec): min=5372, max=36088, avg=7880.80, stdev=2014.39 00:30:55.951 clat (usec): min=894, max=6164, avg=3321.72, stdev=611.63 00:30:55.951 lat (usec): min=900, max=6172, avg=3329.60, stdev=611.69 00:30:55.951 clat percentiles (usec): 00:30:55.951 | 1.00th=[ 2008], 5.00th=[ 2409], 10.00th=[ 2573], 20.00th=[ 2802], 00:30:55.951 | 30.00th=[ 2999], 40.00th=[ 3163], 50.00th=[ 3326], 60.00th=[ 3458], 00:30:55.951 | 70.00th=[ 3654], 80.00th=[ 3752], 90.00th=[ 4047], 95.00th=[ 4359], 00:30:55.951 | 99.00th=[ 5014], 99.50th=[ 5407], 99.90th=[ 5866], 99.95th=[ 5932], 00:30:55.951 | 99.99th=[ 6128] 00:30:55.951 bw ( KiB/s): min=18624, max=19728, per=28.90%, avg=19146.30, stdev=381.34, samples=10 00:30:55.951 iops : min= 2328, max= 2466, avg=2393.20, stdev=47.55, samples=10 00:30:55.951 lat (usec) : 1000=0.04% 00:30:55.951 lat (msec) : 2=0.94%, 4=88.05%, 10=10.97% 00:30:55.951 cpu : usr=97.04%, sys=2.66%, ctx=7, majf=0, minf=0 00:30:55.951 IO depths : 1=0.2%, 2=3.6%, 4=66.8%, 8=29.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:55.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.952 complete : 0=0.0%, 4=93.8%, 8=6.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.952 issued rwts: total=11969,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.952 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:55.952 filename1: (groupid=0, jobs=1): err= 0: pid=1073439: Wed Jul 24 23:19:13 2024 00:30:55.952 read: IOPS=1924, BW=15.0MiB/s (15.8MB/s)(75.2MiB/5004msec) 00:30:55.952 slat (nsec): min=7824, max=34845, avg=8806.01, 
stdev=2645.68 00:30:55.952 clat (usec): min=2231, max=7087, avg=4132.50, stdev=706.33 00:30:55.952 lat (usec): min=2243, max=7098, avg=4141.31, stdev=706.35 00:30:55.952 clat percentiles (usec): 00:30:55.952 | 1.00th=[ 2737], 5.00th=[ 3097], 10.00th=[ 3294], 20.00th=[ 3556], 00:30:55.952 | 30.00th=[ 3720], 40.00th=[ 3851], 50.00th=[ 4047], 60.00th=[ 4228], 00:30:55.952 | 70.00th=[ 4490], 80.00th=[ 4752], 90.00th=[ 5080], 95.00th=[ 5407], 00:30:55.952 | 99.00th=[ 5997], 99.50th=[ 6194], 99.90th=[ 6652], 99.95th=[ 6783], 00:30:55.952 | 99.99th=[ 7111] 00:30:55.952 bw ( KiB/s): min=14912, max=15776, per=23.24%, avg=15400.00, stdev=286.34, samples=10 00:30:55.952 iops : min= 1864, max= 1972, avg=1925.00, stdev=35.79, samples=10 00:30:55.952 lat (msec) : 4=48.35%, 10=51.65% 00:30:55.952 cpu : usr=96.74%, sys=2.98%, ctx=11, majf=0, minf=2 00:30:55.952 IO depths : 1=0.3%, 2=1.4%, 4=70.5%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:55.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.952 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.952 issued rwts: total=9628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.952 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:55.952 filename1: (groupid=0, jobs=1): err= 0: pid=1073440: Wed Jul 24 23:19:13 2024 00:30:55.952 read: IOPS=2019, BW=15.8MiB/s (16.5MB/s)(78.9MiB/5002msec) 00:30:55.952 slat (nsec): min=7831, max=33038, avg=8641.08, stdev=2321.65 00:30:55.952 clat (usec): min=1945, max=9489, avg=3935.55, stdev=696.30 00:30:55.952 lat (usec): min=1954, max=9513, avg=3944.19, stdev=696.30 00:30:55.952 clat percentiles (usec): 00:30:55.952 | 1.00th=[ 2606], 5.00th=[ 2966], 10.00th=[ 3130], 20.00th=[ 3392], 00:30:55.952 | 30.00th=[ 3556], 40.00th=[ 3720], 50.00th=[ 3785], 60.00th=[ 3982], 00:30:55.952 | 70.00th=[ 4178], 80.00th=[ 4490], 90.00th=[ 4883], 95.00th=[ 5211], 00:30:55.952 | 99.00th=[ 5932], 99.50th=[ 6194], 99.90th=[ 6915], 99.95th=[ 7767], 
00:30:55.952 | 99.99th=[ 9372] 00:30:55.952 bw ( KiB/s): min=15872, max=16416, per=24.32%, avg=16115.56, stdev=175.41, samples=9 00:30:55.952 iops : min= 1984, max= 2052, avg=2014.44, stdev=21.93, samples=9 00:30:55.952 lat (msec) : 2=0.02%, 4=60.44%, 10=39.54% 00:30:55.952 cpu : usr=96.86%, sys=2.86%, ctx=7, majf=0, minf=9 00:30:55.952 IO depths : 1=0.4%, 2=1.4%, 4=71.3%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:55.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.952 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.952 issued rwts: total=10104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.952 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:55.952 00:30:55.952 Run status group 0 (all jobs): 00:30:55.952 READ: bw=64.7MiB/s (67.8MB/s), 15.0MiB/s-18.7MiB/s (15.8MB/s-19.6MB/s), io=326MiB (342MB), run=5002-5042msec 00:30:55.952 23:19:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:30:55.952 23:19:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:55.952 23:19:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:55.952 23:19:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:55.952 23:19:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:55.952 23:19:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:55.952 23:19:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.952 23:19:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.952 23:19:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.952 23:19:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:55.952 23:19:13 nvmf_dif.fio_dif_rand_params -- 
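fio prints every rate twice in the summaries above: binary units (KiB/s, MiB/s) and decimal units (kB/s, MB/s). The paired figures can be sanity-checked with shell arithmetic; note that both printed numbers are independently rounded from the raw byte count, so the converted value may differ from fio's by one unit:

```shell
set -euo pipefail

# Convert fio's binary-unit rates to decimal units (integer truncation).
kib_to_kb() { echo $(( $1 * 1024 / 1000 )); }            # KiB/s -> kB/s
mib_to_mb() { echo $(( $1 * 1024 * 1024 / 1000000 )); }  # MiB   -> MB

# Examples taken from the run above: BW=1982KiB/s (2029kB/s), io=468MiB (491MB).
kib_to_kb 1982
mib_to_mb 468
```

For instance, 1982 KiB/s × 1024 / 1000 ≈ 2029 kB/s, matching the `BW=1982KiB/s (2029kB/s)` line; the 468 MiB total converts to ~490–491 MB depending on rounding direction.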
common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.952 23:19:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.952 23:19:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.952 23:19:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:55.952 23:19:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:55.952 23:19:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:55.952 23:19:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:55.952 23:19:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.952 23:19:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.952 23:19:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.952 23:19:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:55.952 23:19:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.952 23:19:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.952 23:19:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.952 00:30:55.952 real 0m24.713s 00:30:55.952 user 5m19.447s 00:30:55.952 sys 0m4.266s 00:30:55.952 23:19:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:55.952 23:19:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.952 ************************************ 00:30:55.952 END TEST fio_dif_rand_params 00:30:55.952 ************************************ 00:30:55.952 23:19:13 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:30:55.952 23:19:13 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 
00:30:55.952 23:19:13 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:55.952 23:19:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:55.952 ************************************ 00:30:55.952 START TEST fio_dif_digest 00:30:55.952 ************************************ 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@10 -- # set +x 00:30:55.952 bdev_null0 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:55.952 [2024-07-24 23:19:13.705261] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 
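The digest test above creates its target with `bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3`. Assuming the two positional arguments are total size in MiB and block size in bytes (as in SPDK's `bdev_null_create` RPC), the implied namespace geometry can be sketched as:

```shell
set -euo pipefail

# Geometry implied by: bdev_null_create bdev_null0 64 512 --md-size 16
size_mib=64     # total bdev size, assumed to be in MiB
block_size=512  # data block size in bytes
md_size=16      # per-block metadata (holds the DIF protection info)

num_blocks=$(( size_mib * 1024 * 1024 / block_size ))
md_bytes=$(( num_blocks * md_size ))

echo "blocks=$num_blocks md_total_bytes=$md_bytes"
```

With DIF type 3 enabled, each 512-byte block carries the 16-byte metadata region, which is where the protection information checked by the `hdgst`/`ddgst` fio run lives.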
00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:55.952 { 00:30:55.952 "params": { 00:30:55.952 "name": "Nvme$subsystem", 00:30:55.952 "trtype": "$TEST_TRANSPORT", 00:30:55.952 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:55.952 "adrfam": "ipv4", 00:30:55.952 "trsvcid": "$NVMF_PORT", 00:30:55.952 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:55.952 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:55.952 "hdgst": ${hdgst:-false}, 00:30:55.952 "ddgst": ${ddgst:-false} 00:30:55.952 }, 00:30:55.952 "method": "bdev_nvme_attach_controller" 00:30:55.952 } 00:30:55.952 EOF 00:30:55.952 )") 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:55.952 23:19:13 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:30:55.952 23:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:30:55.953 23:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:30:55.953 23:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:55.953 23:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:30:55.953 23:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:55.953 23:19:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 00:30:55.953 23:19:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:30:55.953 23:19:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:55.953 "params": { 00:30:55.953 "name": "Nvme0", 00:30:55.953 "trtype": "tcp", 00:30:55.953 "traddr": "10.0.0.2", 00:30:55.953 "adrfam": "ipv4", 00:30:55.953 "trsvcid": "4420", 00:30:55.953 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:55.953 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:55.953 "hdgst": true, 00:30:55.953 "ddgst": true 00:30:55.953 }, 00:30:55.953 "method": "bdev_nvme_attach_controller" 00:30:55.953 }' 00:30:56.236 23:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:56.236 23:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:56.236 23:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:56.236 23:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:56.236 23:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:56.236 23:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:56.236 23:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:56.236 23:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:56.236 23:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:56.236 23:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:56.501 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:56.501 ... 00:30:56.501 fio-3.35 00:30:56.501 Starting 3 threads 00:30:56.501 EAL: No free 2048 kB hugepages reported on node 1 00:31:08.733 00:31:08.733 filename0: (groupid=0, jobs=1): err= 0: pid=1074737: Wed Jul 24 23:19:24 2024 00:31:08.733 read: IOPS=222, BW=27.8MiB/s (29.1MB/s)(279MiB/10048msec) 00:31:08.733 slat (nsec): min=5638, max=92009, avg=8429.99, stdev=2964.41 00:31:08.733 clat (usec): min=5621, max=58415, avg=13463.44, stdev=7927.22 00:31:08.733 lat (usec): min=5627, max=58421, avg=13471.87, stdev=7927.13 00:31:08.733 clat percentiles (usec): 00:31:08.733 | 1.00th=[ 6849], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[10159], 00:31:08.733 | 30.00th=[11076], 40.00th=[11731], 50.00th=[12387], 60.00th=[12780], 00:31:08.733 | 70.00th=[13304], 80.00th=[13829], 90.00th=[14615], 95.00th=[15795], 00:31:08.733 | 99.00th=[54264], 99.50th=[55313], 99.90th=[56361], 99.95th=[57410], 00:31:08.733 | 99.99th=[58459] 00:31:08.733 bw ( KiB/s): min=22528, max=36608, per=38.34%, avg=28569.60, stdev=4275.16, samples=20 00:31:08.733 iops : min= 176, 
max= 286, avg=223.20, stdev=33.40, samples=20 00:31:08.733 lat (msec) : 10=17.77%, 20=78.60%, 50=0.18%, 100=3.45% 00:31:08.733 cpu : usr=95.38%, sys=4.34%, ctx=27, majf=0, minf=210 00:31:08.733 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:08.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.734 issued rwts: total=2234,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.734 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:08.734 filename0: (groupid=0, jobs=1): err= 0: pid=1074738: Wed Jul 24 23:19:24 2024 00:31:08.734 read: IOPS=165, BW=20.7MiB/s (21.7MB/s)(208MiB/10008msec) 00:31:08.734 slat (nsec): min=5753, max=37081, avg=7212.10, stdev=1709.05 00:31:08.734 clat (usec): min=7717, max=97314, avg=18074.92, stdev=12151.33 00:31:08.734 lat (usec): min=7723, max=97322, avg=18082.13, stdev=12151.31 00:31:08.734 clat percentiles (usec): 00:31:08.734 | 1.00th=[ 9634], 5.00th=[10814], 10.00th=[11731], 20.00th=[12780], 00:31:08.734 | 30.00th=[13698], 40.00th=[14353], 50.00th=[15008], 60.00th=[15533], 00:31:08.734 | 70.00th=[16057], 80.00th=[16712], 90.00th=[18220], 95.00th=[54789], 00:31:08.734 | 99.00th=[57410], 99.50th=[58459], 99.90th=[95945], 99.95th=[96994], 00:31:08.734 | 99.99th=[96994] 00:31:08.734 bw ( KiB/s): min=15360, max=25600, per=28.48%, avg=21222.40, stdev=2849.36, samples=20 00:31:08.734 iops : min= 120, max= 200, avg=165.80, stdev=22.26, samples=20 00:31:08.734 lat (msec) : 10=1.75%, 20=89.64%, 50=0.18%, 100=8.43% 00:31:08.734 cpu : usr=96.43%, sys=3.33%, ctx=24, majf=0, minf=137 00:31:08.734 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:08.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.734 issued rwts: total=1660,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:31:08.734 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:08.734 filename0: (groupid=0, jobs=1): err= 0: pid=1074739: Wed Jul 24 23:19:24 2024 00:31:08.734 read: IOPS=194, BW=24.3MiB/s (25.5MB/s)(244MiB/10048msec) 00:31:08.734 slat (nsec): min=5656, max=90710, avg=7617.67, stdev=2534.57 00:31:08.734 clat (usec): min=7330, max=96655, avg=15385.09, stdev=9587.82 00:31:08.734 lat (usec): min=7336, max=96664, avg=15392.71, stdev=9587.94 00:31:08.734 clat percentiles (usec): 00:31:08.734 | 1.00th=[ 8455], 5.00th=[10028], 10.00th=[10683], 20.00th=[11469], 00:31:08.734 | 30.00th=[12518], 40.00th=[13304], 50.00th=[13829], 60.00th=[14353], 00:31:08.734 | 70.00th=[14877], 80.00th=[15401], 90.00th=[16450], 95.00th=[17695], 00:31:08.734 | 99.00th=[56361], 99.50th=[57934], 99.90th=[95945], 99.95th=[96994], 00:31:08.734 | 99.99th=[96994] 00:31:08.734 bw ( KiB/s): min=20992, max=29440, per=33.55%, avg=24998.40, stdev=2448.82, samples=20 00:31:08.734 iops : min= 164, max= 230, avg=195.30, stdev=19.13, samples=20 00:31:08.734 lat (msec) : 10=5.17%, 20=90.54%, 50=0.10%, 100=4.19% 00:31:08.734 cpu : usr=95.44%, sys=4.00%, ctx=326, majf=0, minf=185 00:31:08.734 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:08.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.734 issued rwts: total=1955,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.734 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:08.734 00:31:08.734 Run status group 0 (all jobs): 00:31:08.734 READ: bw=72.8MiB/s (76.3MB/s), 20.7MiB/s-27.8MiB/s (21.7MB/s-29.1MB/s), io=731MiB (767MB), run=10008-10048msec 00:31:08.734 23:19:24 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:08.734 23:19:24 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:08.734 23:19:24 nvmf_dif.fio_dif_digest -- 
target/dif.sh@45 -- # for sub in "$@" 00:31:08.734 23:19:24 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:08.734 23:19:24 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:08.734 23:19:24 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:08.734 23:19:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.734 23:19:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:08.734 23:19:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.734 23:19:24 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:08.734 23:19:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.734 23:19:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:08.734 23:19:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.734 00:31:08.734 real 0m11.157s 00:31:08.734 user 0m45.730s 00:31:08.734 sys 0m1.475s 00:31:08.734 23:19:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:08.734 23:19:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:08.734 ************************************ 00:31:08.734 END TEST fio_dif_digest 00:31:08.734 ************************************ 00:31:08.734 23:19:24 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:08.734 23:19:24 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:08.734 23:19:24 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:08.734 23:19:24 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:31:08.734 23:19:24 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:08.734 23:19:24 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:31:08.734 23:19:24 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:08.734 23:19:24 nvmf_dif -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:08.734 rmmod nvme_tcp 00:31:08.734 rmmod nvme_fabrics 00:31:08.734 rmmod nvme_keyring 00:31:08.734 23:19:24 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:08.734 23:19:24 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:31:08.734 23:19:24 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:31:08.734 23:19:24 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1064263 ']' 00:31:08.734 23:19:24 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1064263 00:31:08.734 23:19:24 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 1064263 ']' 00:31:08.734 23:19:24 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 1064263 00:31:08.734 23:19:24 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:31:08.734 23:19:24 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:08.734 23:19:24 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1064263 00:31:08.734 23:19:24 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:08.734 23:19:24 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:08.734 23:19:24 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1064263' 00:31:08.734 killing process with pid 1064263 00:31:08.734 23:19:24 nvmf_dif -- common/autotest_common.sh@969 -- # kill 1064263 00:31:08.734 23:19:24 nvmf_dif -- common/autotest_common.sh@974 -- # wait 1064263 00:31:08.734 23:19:25 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:08.734 23:19:25 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:11.280 Waiting for block devices as requested 00:31:11.280 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:11.280 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:11.280 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:11.280 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:11.541 0000:80:01.2 (8086 0b00): 
vfio-pci -> ioatdma 00:31:11.541 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:11.541 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:11.801 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:11.801 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:11.801 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:12.062 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:12.062 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:12.062 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:12.323 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:12.323 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:12.323 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:12.323 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:12.323 23:19:30 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:12.323 23:19:30 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:12.323 23:19:30 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:12.323 23:19:30 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:12.323 23:19:30 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:12.323 23:19:30 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:12.323 23:19:30 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.914 23:19:32 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:14.914 00:31:14.914 real 1m19.300s 00:31:14.914 user 8m13.512s 00:31:14.914 sys 0m20.922s 00:31:14.914 23:19:32 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:14.914 23:19:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:14.914 ************************************ 00:31:14.914 END TEST nvmf_dif 00:31:14.914 ************************************ 00:31:14.914 23:19:32 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:14.914 23:19:32 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:14.914 23:19:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:14.914 23:19:32 -- common/autotest_common.sh@10 -- # set +x 00:31:14.914 ************************************ 00:31:14.914 START TEST nvmf_abort_qd_sizes 00:31:14.914 ************************************ 00:31:14.914 23:19:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:14.914 * Looking for test storage... 00:31:14.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:14.914 23:19:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:14.914 23:19:32 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:14.914 23:19:32 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:14.914 23:19:32 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:14.914 23:19:32 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:14.914 23:19:32 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:14.914 23:19:32 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:14.914 23:19:32 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:14.914 23:19:32 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:14.914 23:19:32 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:14.914 23:19:32 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:14.914 23:19:32 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:14.914 23:19:32 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:14.914 23:19:32 nvmf_abort_qd_sizes -- 
nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:14.914 23:19:32 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:14.914 23:19:32 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:14.915 23:19:32 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:14.915 23:19:32 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:14.915 23:19:32 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:14.915 23:19:32 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:14.915 23:19:32 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:14.915 23:19:32 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:14.915 23:19:32 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.915 23:19:32 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.915 23:19:32 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.915 23:19:32 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:14.915 23:19:32 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.915 23:19:32 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:31:14.915 23:19:32 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:14.915 23:19:32 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:14.915 23:19:32 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:14.915 23:19:32 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:14.915 23:19:32 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:14.915 23:19:32 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:14.915 23:19:32 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:14.915 23:19:32 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:14.915 23:19:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:14.915 23:19:32 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:14.915 23:19:32 nvmf_abort_qd_sizes -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:14.915 23:19:32 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:14.915 23:19:32 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:14.915 23:19:32 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:14.915 23:19:32 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:14.915 23:19:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:14.915 23:19:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.915 23:19:32 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:14.915 23:19:32 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:14.915 23:19:32 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:31:14.915 23:19:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:23.055 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:23.055 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:31:23.055 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:23.055 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:23.055 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:23.055 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:23.055 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:23.055 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:31:23.055 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:23.055 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:31:23.055 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:31:23.055 23:19:39 
nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:31:23.055 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:31:23.055 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:31:23.055 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:31:23.055 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:23.055 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:23.056 
23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:23.056 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:23.056 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:23.056 
23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:23.056 Found net devices under 0000:31:00.0: cvl_0_0 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:23.056 Found net devices under 0000:31:00.1: cvl_0_1 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- 
nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:23.056 23:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:23.056 23:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:23.056 23:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:23.056 23:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:23.056 23:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:23.056 23:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:23.056 23:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:23.056 23:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
lo up 00:31:23.056 23:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:23.056 23:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:23.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:23.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:31:23.056 00:31:23.056 --- 10.0.0.2 ping statistics --- 00:31:23.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:23.056 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:31:23.056 23:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:23.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:23.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:31:23.056 00:31:23.056 --- 10.0.0.1 ping statistics --- 00:31:23.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:23.056 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:31:23.056 23:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:23.056 23:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:31:23.056 23:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:23.056 23:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:26.358 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:26.358 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:26.358 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:26.358 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:26.358 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:26.358 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:26.358 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:26.358 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:26.358 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:26.358 0000:00:01.7 (8086 0b00): ioatdma -> 
vfio-pci 00:31:26.358 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:26.358 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:26.358 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:26.358 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:26.358 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:26.358 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:26.618 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:31:26.618 23:19:44 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:26.618 23:19:44 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:26.618 23:19:44 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:26.618 23:19:44 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:26.618 23:19:44 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:26.618 23:19:44 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:26.618 23:19:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:26.618 23:19:44 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:26.618 23:19:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:26.618 23:19:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:26.618 23:19:44 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1085087 00:31:26.618 23:19:44 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1085087 00:31:26.618 23:19:44 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:26.618 23:19:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 1085087 ']' 00:31:26.618 23:19:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:26.618 23:19:44 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:31:26.618 23:19:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:26.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:26.618 23:19:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:26.618 23:19:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:26.618 [2024-07-24 23:19:44.373843] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:31:26.618 [2024-07-24 23:19:44.373892] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:26.879 EAL: No free 2048 kB hugepages reported on node 1 00:31:26.879 [2024-07-24 23:19:44.446384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:26.879 [2024-07-24 23:19:44.513759] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:26.879 [2024-07-24 23:19:44.513794] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:26.879 [2024-07-24 23:19:44.513802] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:26.879 [2024-07-24 23:19:44.513808] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:26.879 [2024-07-24 23:19:44.513814] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
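The interface and namespace plumbing traced above (nvmf/common.sh@229 through @268) condenses to the sketch below. The interface names (cvl_0_0, cvl_0_1), namespace name, addresses, and TCP port are taken directly from this log; the `run` helper only prints each command rather than executing it, since applying them requires root, so this is a dry-run sketch rather than the harness itself.

```shell
#!/bin/sh
# Dry-run sketch of the NVMe/TCP test topology: the target-side interface
# is moved into a private network namespace and the initiator reaches it
# over 10.0.0.0/24 on TCP port 4420.
NS=cvl_0_0_ns_spdk   # namespace holding the target side
TGT_IF=cvl_0_0       # NVMF_TARGET_INTERFACE (moves into $NS)
INI_IF=cvl_0_1       # NVMF_INITIATOR_INTERFACE (stays in the root namespace)

run() { printf '+ %s\n' "$*"; }   # print only; swap body for: eval "$@"

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2   # initiator -> target sanity check, as in the log
```

With the target interface inside the namespace, nvmf_tgt is then launched under `ip netns exec cvl_0_0_ns_spdk` (the NVMF_TARGET_NS_CMD prefix seen in the trace), which is why the listener at 10.0.0.2:4420 is only reachable through this topology.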
00:31:26.879 [2024-07-24 23:19:44.513949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:26.879 [2024-07-24 23:19:44.514069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:26.879 [2024-07-24 23:19:44.514224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:26.879 [2024-07-24 23:19:44.514226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:27.450 23:19:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:27.450 23:19:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:31:27.450 23:19:45 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:27.450 23:19:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:27.450 23:19:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:27.450 23:19:45 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:27.450 23:19:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:27.450 23:19:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:27.450 23:19:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:27.450 23:19:45 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:31:27.450 23:19:45 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:31:27.450 23:19:45 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:31:27.450 23:19:45 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:27.450 23:19:45 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:27.450 23:19:45 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 
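The nvme_in_userspace walk traced next (scripts/common.sh@309 through @326) boils down to filtering NVMe-class BDFs to those with a node under the kernel driver directory. A minimal sketch, parameterized on the driver directory so it can run unprivileged; the function name and directory argument are illustrative, not SPDK's own API:

```shell
# Print the PCI functions (BDFs) that have an entry under the given driver
# directory, mirroring the [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] test in
# the trace. Pass /sys/bus/pci/drivers/nvme on a real system.
list_bound_bdfs() {
    drvdir=$1; shift
    for bdf in "$@"; do
        [ -e "$drvdir/$bdf" ] && printf '%s\n' "$bdf"
    done
    return 0
}
```

On the machine in this log, `list_bound_bdfs /sys/bus/pci/drivers/nvme 0000:65:00.0` would have printed the single BDF 0000:65:00.0, which the test then attaches as the `spdk_target` controller.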
00:31:27.450 23:19:45 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:31:27.450 23:19:45 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:27.450 23:19:45 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:27.450 23:19:45 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:31:27.450 23:19:45 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:31:27.450 23:19:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:31:27.450 23:19:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:31:27.450 23:19:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:27.450 23:19:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:27.450 23:19:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:27.450 23:19:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:27.450 ************************************ 00:31:27.450 START TEST spdk_target_abort 00:31:27.450 ************************************ 00:31:27.450 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:31:27.450 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:27.450 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:31:27.450 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.450 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:28.032 spdk_targetn1 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:28.032 [2024-07-24 23:19:45.548810] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:28.032 [2024-07-24 23:19:45.589078] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:28.032 23:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:28.032 EAL: No free 2048 kB hugepages reported on node 1 00:31:28.032 [2024-07-24 23:19:45.699895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:64 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:31:28.032 [2024-07-24 23:19:45.699925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:31:28.032 [2024-07-24 23:19:45.715238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:528 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:31:28.032 [2024-07-24 23:19:45.715256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0044 p:1 m:0 dnr:0 00:31:28.032 [2024-07-24 23:19:45.763178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2096 len:8 
PRP1 0x2000078c6000 PRP2 0x0 00:31:28.032 [2024-07-24 23:19:45.763196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:31:31.335 Initializing NVMe Controllers
00:31:31.335 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:31:31.335 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:31:31.335 Initialization complete. Launching workers.
00:31:31.335 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11885, failed: 3
00:31:31.335 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 4346, failed to submit 7542
00:31:31.335 success 716, unsuccess 3630, failed 0
00:31:31.335 23:19:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:31.335 23:19:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:31.335 EAL: No free 2048 kB hugepages reported on node 1 00:31:31.335 [2024-07-24 23:19:48.969901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:174 nsid:1 lba:2080 len:8 PRP1 0x200007c56000 PRP2 0x0 00:31:31.335 [2024-07-24 23:19:48.969943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:174 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:31.335 [2024-07-24 23:19:48.993875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:2616 len:8 PRP1 0x200007c58000 PRP2 0x0 00:31:31.335 [2024-07-24 23:19:48.993898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:33.878 [2024-07-24 23:19:51.239908] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:54080 len:8 PRP1 0x200007c62000 PRP2 0x0 00:31:33.878 [2024-07-24 23:19:51.239953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:006a p:1 m:0 dnr:0
00:31:34.448 Initializing NVMe Controllers
00:31:34.448 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:31:34.448 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:31:34.448 Initialization complete. Launching workers.
00:31:34.448 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8558, failed: 3
00:31:34.448 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1204, failed to submit 7357
00:31:34.448 success 342, unsuccess 862, failed 0
00:31:34.448 23:19:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:34.448 23:19:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:34.448 EAL: No free 2048 kB hugepages reported on node 1 00:31:34.708 [2024-07-24 23:19:52.326357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:177 nsid:1 lba:2152 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:31:34.708 [2024-07-24 23:19:52.326385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:177 cdw0:0 sqhd:0083 p:0 m:0 dnr:0
00:31:38.007 Initializing NVMe Controllers
00:31:38.007 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:31:38.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:31:38.007 Initialization complete. Launching workers.
00:31:38.007 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42030, failed: 1
00:31:38.007 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2564, failed to submit 39467
00:31:38.007 success 609, unsuccess 1955, failed 0
00:31:38.007 23:19:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:38.007 23:19:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.007 23:19:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:38.007 23:19:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.007 23:19:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:38.007 23:19:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.007 23:19:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:39.920 23:19:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.920 23:19:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1085087 00:31:39.920 23:19:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 1085087 ']' 00:31:39.920 23:19:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 1085087 00:31:39.920 23:19:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:31:39.920 23:19:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:39.920 23:19:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1085087 00:31:39.920 23:19:57 nvmf_abort_qd_sizes.spdk_target_abort
-- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:39.920 23:19:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:39.920 23:19:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1085087' 00:31:39.920 killing process with pid 1085087 00:31:39.920 23:19:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 1085087 00:31:39.920 23:19:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 1085087 00:31:39.920 00:31:39.920 real 0m12.149s 00:31:39.920 user 0m49.159s 00:31:39.920 sys 0m2.020s 00:31:39.920 23:19:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:39.920 23:19:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:39.920 ************************************ 00:31:39.920 END TEST spdk_target_abort 00:31:39.920 ************************************ 00:31:39.920 23:19:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:39.920 23:19:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:39.920 23:19:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:39.920 23:19:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:39.920 ************************************ 00:31:39.920 START TEST kernel_target_abort 00:31:39.920 ************************************ 00:31:39.920 23:19:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:31:39.920 23:19:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:39.921 23:19:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:31:39.921 23:19:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- 
# ip_candidates=() 00:31:39.921 23:19:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:39.921 23:19:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.921 23:19:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.921 23:19:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:39.921 23:19:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.921 23:19:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:39.921 23:19:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:39.921 23:19:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:39.921 23:19:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:39.921 23:19:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:39.921 23:19:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:39.921 23:19:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:39.921 23:19:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:39.921 23:19:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:39.921 23:19:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:31:39.921 23:19:57 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:39.921 23:19:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:39.921 23:19:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:39.921 23:19:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:44.131 Waiting for block devices as requested 00:31:44.131 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:44.131 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:44.131 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:44.131 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:44.131 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:44.131 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:44.131 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:44.131 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:44.392 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:44.392 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:44.392 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:44.652 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:44.652 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:44.652 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:44.652 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:44.913 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:44.913 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:44.913 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:44.913 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:44.913 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:44.913 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local 
device=nvme0n1 00:31:44.913 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:44.913 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:44.913 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:44.913 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:44.913 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:44.913 No valid GPT data, bailing 00:31:44.913 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:44.913 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:44.913 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:44.913 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:44.913 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:44.913 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:44.913 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:44.913 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:44.913 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:44.913 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:31:44.913 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:44.913 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:31:44.913 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:44.913 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:31:44.913 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:31:44.913 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:31:44.913 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:44.913 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.1 -t tcp -s 4420
00:31:44.913 00:31:44.913 Discovery Log Number of Records 2, Generation counter 2
00:31:44.913 =====Discovery Log Entry 0======
00:31:44.913 trtype: tcp
00:31:44.913 adrfam: ipv4
00:31:44.913 subtype: current discovery subsystem
00:31:44.913 treq: not specified, sq flow control disable supported
00:31:44.913 portid: 1
00:31:44.913 trsvcid: 4420
00:31:44.913 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:31:44.913 traddr: 10.0.0.1
00:31:44.913 eflags: none
00:31:44.913 sectype: none
00:31:44.913 =====Discovery Log Entry 1======
00:31:44.913 trtype: tcp
00:31:44.913 adrfam: ipv4
00:31:44.913 subtype: nvme subsystem
00:31:44.913 treq: not specified, sq flow control disable supported
00:31:44.913 portid: 1
00:31:44.913 trsvcid: 4420
00:31:44.913 subnqn: nqn.2016-06.io.spdk:testnqn
00:31:44.913 traddr: 10.0.0.1
00:31:44.913 eflags: none
00:31:44.913 sectype: none
00:31:44.913 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420
nqn.2016-06.io.spdk:testnqn 00:31:44.913 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:44.914 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:44.914 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:44.914 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:44.914 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:44.914 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:44.914 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:44.914 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:44.914 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:44.914 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:44.914 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:44.914 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:44.914 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:44.914 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:44.914 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:44.914 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:44.914 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:44.914 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:44.914 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:44.914 23:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:45.174 EAL: No free 2048 kB hugepages reported on node 1 00:31:48.549 Initializing NVMe Controllers 00:31:48.549 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:48.549 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:48.549 Initialization complete. Launching workers. 
00:31:48.549 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 52784, failed: 0 00:31:48.549 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 52784, failed to submit 0 00:31:48.549 success 0, unsuccess 52784, failed 0 00:31:48.549 23:20:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:48.549 23:20:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:48.549 EAL: No free 2048 kB hugepages reported on node 1 00:31:51.092 Initializing NVMe Controllers 00:31:51.093 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:51.093 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:51.093 Initialization complete. Launching workers. 
00:31:51.093 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 93259, failed: 0 00:31:51.093 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23510, failed to submit 69749 00:31:51.093 success 0, unsuccess 23510, failed 0 00:31:51.093 23:20:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:51.093 23:20:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:51.093 EAL: No free 2048 kB hugepages reported on node 1 00:31:54.395 Initializing NVMe Controllers 00:31:54.395 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:54.395 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:54.395 Initialization complete. Launching workers. 
00:31:54.395 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 89966, failed: 0 00:31:54.395 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22446, failed to submit 67520 00:31:54.395 success 0, unsuccess 22446, failed 0 00:31:54.395 23:20:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:31:54.395 23:20:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:54.395 23:20:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:31:54.395 23:20:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:54.395 23:20:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:54.395 23:20:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:54.395 23:20:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:54.395 23:20:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:54.395 23:20:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:54.395 23:20:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:58.607 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:58.607 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:58.607 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:58.607 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:58.607 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:58.607 
0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:58.607 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:58.607 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:58.607 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:58.607 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:58.607 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:58.607 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:58.607 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:58.607 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:58.607 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:58.607 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:59.994 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:31:59.994 00:31:59.994 real 0m20.280s 00:31:59.994 user 0m8.638s 00:31:59.994 sys 0m6.567s 00:31:59.994 23:20:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:59.994 23:20:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:59.994 ************************************ 00:31:59.994 END TEST kernel_target_abort 00:31:59.994 ************************************ 00:32:00.255 23:20:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:00.256 23:20:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:00.256 23:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:00.256 23:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:32:00.256 23:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:00.256 23:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:32:00.256 23:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:00.256 23:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:00.256 rmmod nvme_tcp 00:32:00.256 rmmod nvme_fabrics 00:32:00.256 rmmod nvme_keyring 00:32:00.256 23:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:32:00.256 23:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:32:00.256 23:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:32:00.256 23:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1085087 ']' 00:32:00.256 23:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1085087 00:32:00.256 23:20:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 1085087 ']' 00:32:00.256 23:20:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 1085087 00:32:00.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1085087) - No such process 00:32:00.256 23:20:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 1085087 is not found' 00:32:00.256 Process with pid 1085087 is not found 00:32:00.256 23:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:00.256 23:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:04.463 Waiting for block devices as requested 00:32:04.463 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:04.463 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:04.463 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:04.463 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:04.463 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:04.463 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:04.463 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:04.463 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:04.723 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:04.723 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:04.723 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:04.984 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:04.984 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:04.984 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:04.984 
0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:05.244 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:05.244 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:05.244 23:20:22 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:05.244 23:20:22 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:05.244 23:20:22 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:05.244 23:20:22 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:05.244 23:20:22 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:05.244 23:20:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:05.244 23:20:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:07.789 23:20:24 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:07.789 00:32:07.789 real 0m52.724s 00:32:07.789 user 1m3.369s 00:32:07.789 sys 0m19.918s 00:32:07.789 23:20:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:07.789 23:20:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:07.789 ************************************ 00:32:07.789 END TEST nvmf_abort_qd_sizes 00:32:07.789 ************************************ 00:32:07.789 23:20:25 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:07.789 23:20:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:07.789 23:20:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:07.789 23:20:25 -- common/autotest_common.sh@10 -- # set +x 00:32:07.789 ************************************ 00:32:07.789 START TEST keyring_file 00:32:07.789 ************************************ 00:32:07.789 23:20:25 keyring_file -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:07.789 * Looking for test storage... 00:32:07.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:07.789 23:20:25 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:07.789 23:20:25 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:07.789 23:20:25 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:07.789 23:20:25 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:07.789 23:20:25 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:07.789 23:20:25 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:07.789 23:20:25 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:07.789 23:20:25 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:07.789 23:20:25 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:07.789 23:20:25 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:07.789 23:20:25 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:07.789 23:20:25 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:07.789 23:20:25 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:07.789 23:20:25 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:07.789 23:20:25 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:07.789 23:20:25 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:07.789 23:20:25 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:07.789 23:20:25 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:07.790 23:20:25 keyring_file -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:07.790 23:20:25 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:07.790 23:20:25 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:07.790 23:20:25 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:07.790 23:20:25 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:07.790 23:20:25 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.790 23:20:25 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.790 23:20:25 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.790 23:20:25 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:07.790 23:20:25 keyring_file -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.790 23:20:25 keyring_file -- nvmf/common.sh@47 -- # : 0 00:32:07.790 23:20:25 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:07.790 23:20:25 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:07.790 23:20:25 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:07.790 23:20:25 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:07.790 23:20:25 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:07.790 23:20:25 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:07.790 23:20:25 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:07.790 23:20:25 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:07.790 23:20:25 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:07.790 23:20:25 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:07.790 23:20:25 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:07.790 23:20:25 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:07.790 23:20:25 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:07.790 23:20:25 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:07.790 23:20:25 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:07.790 23:20:25 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:07.790 23:20:25 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:07.790 23:20:25 
keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:07.790 23:20:25 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:07.790 23:20:25 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:07.790 23:20:25 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Ghh9WTwv7J 00:32:07.790 23:20:25 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:07.790 23:20:25 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:07.790 23:20:25 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:07.790 23:20:25 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:07.790 23:20:25 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:07.790 23:20:25 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:07.790 23:20:25 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:07.790 23:20:25 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Ghh9WTwv7J 00:32:07.790 23:20:25 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Ghh9WTwv7J 00:32:07.790 23:20:25 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Ghh9WTwv7J 00:32:07.790 23:20:25 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:07.790 23:20:25 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:07.790 23:20:25 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:07.790 23:20:25 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:07.790 23:20:25 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:07.790 23:20:25 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:07.790 23:20:25 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.x7DzWcwgWE 00:32:07.790 23:20:25 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:07.790 23:20:25 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:07.790 23:20:25 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:07.790 23:20:25 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:07.790 23:20:25 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:07.790 23:20:25 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:07.790 23:20:25 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:07.790 23:20:25 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.x7DzWcwgWE 00:32:07.790 23:20:25 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.x7DzWcwgWE 00:32:07.790 23:20:25 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.x7DzWcwgWE 00:32:07.790 23:20:25 keyring_file -- keyring/file.sh@30 -- # tgtpid=1095476 00:32:07.790 23:20:25 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1095476 00:32:07.790 23:20:25 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:07.790 23:20:25 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1095476 ']' 00:32:07.790 23:20:25 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:07.790 23:20:25 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:07.790 23:20:25 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:07.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:07.790 23:20:25 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:07.790 23:20:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:07.790 [2024-07-24 23:20:25.356764] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:32:07.790 [2024-07-24 23:20:25.356828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1095476 ] 00:32:07.790 EAL: No free 2048 kB hugepages reported on node 1 00:32:07.790 [2024-07-24 23:20:25.426906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.790 [2024-07-24 23:20:25.496140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:08.362 23:20:26 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:08.362 23:20:26 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:32:08.362 23:20:26 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:08.362 23:20:26 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.362 23:20:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:08.362 [2024-07-24 23:20:26.138139] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:08.623 null0 00:32:08.623 [2024-07-24 23:20:26.170187] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:08.623 [2024-07-24 23:20:26.170485] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:08.623 [2024-07-24 23:20:26.178192] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:08.623 23:20:26 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.623 23:20:26 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:08.623 23:20:26 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:32:08.623 23:20:26 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 
4420 nqn.2016-06.io.spdk:cnode0 00:32:08.623 23:20:26 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:08.623 23:20:26 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:08.623 23:20:26 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:08.623 23:20:26 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:08.623 23:20:26 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:08.623 23:20:26 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.623 23:20:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:08.623 [2024-07-24 23:20:26.194232] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:08.623 request: 00:32:08.623 { 00:32:08.623 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:08.623 "secure_channel": false, 00:32:08.624 "listen_address": { 00:32:08.624 "trtype": "tcp", 00:32:08.624 "traddr": "127.0.0.1", 00:32:08.624 "trsvcid": "4420" 00:32:08.624 }, 00:32:08.624 "method": "nvmf_subsystem_add_listener", 00:32:08.624 "req_id": 1 00:32:08.624 } 00:32:08.624 Got JSON-RPC error response 00:32:08.624 response: 00:32:08.624 { 00:32:08.624 "code": -32602, 00:32:08.624 "message": "Invalid parameters" 00:32:08.624 } 00:32:08.624 23:20:26 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:08.624 23:20:26 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:32:08.624 23:20:26 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:08.624 23:20:26 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:08.624 23:20:26 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:08.624 23:20:26 keyring_file -- keyring/file.sh@46 -- # bperfpid=1095664 00:32:08.624 23:20:26 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1095664 
/var/tmp/bperf.sock 00:32:08.624 23:20:26 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:08.624 23:20:26 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1095664 ']' 00:32:08.624 23:20:26 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:08.624 23:20:26 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:08.624 23:20:26 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:08.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:08.624 23:20:26 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:08.624 23:20:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:08.624 [2024-07-24 23:20:26.256640] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:32:08.624 [2024-07-24 23:20:26.256745] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1095664 ] 00:32:08.624 EAL: No free 2048 kB hugepages reported on node 1 00:32:08.624 [2024-07-24 23:20:26.328273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:08.624 [2024-07-24 23:20:26.392164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:09.568 23:20:26 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:09.568 23:20:26 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:32:09.568 23:20:26 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ghh9WTwv7J 00:32:09.568 23:20:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Ghh9WTwv7J 00:32:09.568 23:20:27 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.x7DzWcwgWE 00:32:09.568 23:20:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.x7DzWcwgWE 00:32:09.568 23:20:27 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:32:09.568 23:20:27 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:32:09.568 23:20:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:09.568 23:20:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:09.568 23:20:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:09.829 23:20:27 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.Ghh9WTwv7J == 
\/\t\m\p\/\t\m\p\.\G\h\h\9\W\T\w\v\7\J ]] 00:32:09.829 23:20:27 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:32:09.829 23:20:27 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:09.829 23:20:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:09.829 23:20:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:09.829 23:20:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:09.829 23:20:27 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.x7DzWcwgWE == \/\t\m\p\/\t\m\p\.\x\7\D\z\W\c\w\g\W\E ]] 00:32:09.829 23:20:27 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:32:09.829 23:20:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:09.829 23:20:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:09.829 23:20:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:09.829 23:20:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:09.829 23:20:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:10.089 23:20:27 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:32:10.089 23:20:27 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:32:10.090 23:20:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:10.090 23:20:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:10.090 23:20:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:10.090 23:20:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:10.090 23:20:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:10.351 23:20:27 keyring_file -- keyring/file.sh@54 -- # 
(( 1 == 1 )) 00:32:10.351 23:20:27 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:10.351 23:20:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:10.351 [2024-07-24 23:20:28.064702] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:10.612 nvme0n1 00:32:10.612 23:20:28 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:32:10.612 23:20:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:10.612 23:20:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:10.612 23:20:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:10.612 23:20:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:10.612 23:20:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:10.612 23:20:28 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:32:10.612 23:20:28 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:32:10.612 23:20:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:10.612 23:20:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:10.612 23:20:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:10.612 23:20:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:10.612 23:20:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:10.873 23:20:28 keyring_file -- 
keyring/file.sh@60 -- # (( 1 == 1 )) 00:32:10.873 23:20:28 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:10.873 Running I/O for 1 seconds... 00:32:11.816 00:32:11.816 Latency(us) 00:32:11.816 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:11.816 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:11.816 nvme0n1 : 1.01 8289.39 32.38 0.00 0.00 15331.75 4669.44 19879.25 00:32:11.816 =================================================================================================================== 00:32:11.816 Total : 8289.39 32.38 0.00 0.00 15331.75 4669.44 19879.25 00:32:11.816 0 00:32:12.077 23:20:29 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:12.077 23:20:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:12.077 23:20:29 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:32:12.077 23:20:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:12.077 23:20:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:12.077 23:20:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:12.077 23:20:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:12.077 23:20:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:12.338 23:20:29 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:32:12.338 23:20:29 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:32:12.338 23:20:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:12.338 23:20:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:12.338 23:20:29 
keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:12.338 23:20:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:12.338 23:20:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:12.338 23:20:30 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:12.338 23:20:30 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:12.338 23:20:30 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:32:12.338 23:20:30 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:12.338 23:20:30 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:12.338 23:20:30 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:12.338 23:20:30 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:12.338 23:20:30 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:12.338 23:20:30 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:12.338 23:20:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:12.599 [2024-07-24 23:20:30.228555] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:12.599 [2024-07-24 23:20:30.229525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1854020 (107): Transport endpoint is not connected 00:32:12.599 [2024-07-24 23:20:30.230520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1854020 (9): Bad file descriptor 00:32:12.599 [2024-07-24 23:20:30.231521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:12.599 [2024-07-24 23:20:30.231531] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:12.599 [2024-07-24 23:20:30.231538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:12.599 request: 00:32:12.599 { 00:32:12.599 "name": "nvme0", 00:32:12.599 "trtype": "tcp", 00:32:12.600 "traddr": "127.0.0.1", 00:32:12.600 "adrfam": "ipv4", 00:32:12.600 "trsvcid": "4420", 00:32:12.600 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:12.600 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:12.600 "prchk_reftag": false, 00:32:12.600 "prchk_guard": false, 00:32:12.600 "hdgst": false, 00:32:12.600 "ddgst": false, 00:32:12.600 "psk": "key1", 00:32:12.600 "method": "bdev_nvme_attach_controller", 00:32:12.600 "req_id": 1 00:32:12.600 } 00:32:12.600 Got JSON-RPC error response 00:32:12.600 response: 00:32:12.600 { 00:32:12.600 "code": -5, 00:32:12.600 "message": "Input/output error" 00:32:12.600 } 00:32:12.600 23:20:30 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:32:12.600 23:20:30 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:12.600 23:20:30 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:12.600 23:20:30 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:12.600 23:20:30 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:32:12.600 
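The expected-failure step above (`NOT bperf_cmd bdev_nvme_attach_controller ... --psk key1`) drives the same JSON-RPC method as the earlier successful attach, and the log echoes the full request object back on failure. The payload shape can be reproduced as follows — a minimal sketch that only builds the request body (it does not open the `/var/tmp/bperf.sock` socket); field names are copied from the request dump in the log:

```python
import json

# Minimal sketch of the JSON-RPC request body that scripts/rpc.py sends to
# the bperf socket for bdev_nvme_attach_controller. Field names mirror the
# "request:" dump in the log above; this builds the payload only.
def build_attach_request(psk_name, req_id=1):
    params = {
        "name": "nvme0",
        "trtype": "tcp",
        "traddr": "127.0.0.1",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "psk": psk_name,
    }
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "bdev_nvme_attach_controller",
        "id": req_id,
        "params": params,
    })

payload = build_attach_request("key1")
```

With `key1` (which does not match the PSK the target was configured with), the target tears the connection down during initialization, which is why the log shows `errno 107: Transport endpoint is not connected` followed by the JSON-RPC `-5` (Input/output error) response.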
23:20:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:12.600 23:20:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:12.600 23:20:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:12.600 23:20:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:12.600 23:20:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:12.861 23:20:30 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:32:12.861 23:20:30 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:32:12.861 23:20:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:12.861 23:20:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:12.861 23:20:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:12.861 23:20:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:12.861 23:20:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:12.861 23:20:30 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:12.861 23:20:30 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:32:12.861 23:20:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:13.122 23:20:30 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:32:13.122 23:20:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:13.122 23:20:30 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:32:13.122 23:20:30 keyring_file -- keyring/file.sh@77 -- # jq length 00:32:13.122 23:20:30 keyring_file 
-- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:13.383 23:20:31 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:32:13.383 23:20:31 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.Ghh9WTwv7J 00:32:13.383 23:20:31 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ghh9WTwv7J 00:32:13.383 23:20:31 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:32:13.383 23:20:31 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ghh9WTwv7J 00:32:13.383 23:20:31 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:13.383 23:20:31 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:13.383 23:20:31 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:13.383 23:20:31 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:13.383 23:20:31 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ghh9WTwv7J 00:32:13.383 23:20:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Ghh9WTwv7J 00:32:13.383 [2024-07-24 23:20:31.159034] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Ghh9WTwv7J': 0100660 00:32:13.383 [2024-07-24 23:20:31.159055] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:13.383 request: 00:32:13.383 { 00:32:13.383 "name": "key0", 00:32:13.383 "path": "/tmp/tmp.Ghh9WTwv7J", 00:32:13.383 "method": "keyring_file_add_key", 00:32:13.383 "req_id": 1 00:32:13.383 } 00:32:13.383 Got JSON-RPC error response 00:32:13.383 response: 00:32:13.383 { 00:32:13.383 "code": -1, 00:32:13.383 "message": "Operation not permitted" 
00:32:13.383 } 00:32:13.644 23:20:31 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:32:13.644 23:20:31 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:13.644 23:20:31 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:13.644 23:20:31 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:13.644 23:20:31 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.Ghh9WTwv7J 00:32:13.644 23:20:31 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ghh9WTwv7J 00:32:13.644 23:20:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Ghh9WTwv7J 00:32:13.644 23:20:31 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.Ghh9WTwv7J 00:32:13.644 23:20:31 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:32:13.644 23:20:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:13.644 23:20:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:13.644 23:20:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:13.644 23:20:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:13.644 23:20:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:13.905 23:20:31 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:32:13.905 23:20:31 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:13.905 23:20:31 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:32:13.905 23:20:31 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:13.905 23:20:31 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:13.905 23:20:31 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:13.905 23:20:31 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:13.905 23:20:31 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:13.905 23:20:31 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:13.905 23:20:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:13.905 [2024-07-24 23:20:31.644266] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Ghh9WTwv7J': No such file or directory 00:32:13.905 [2024-07-24 23:20:31.644283] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:13.905 [2024-07-24 23:20:31.644299] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:13.905 [2024-07-24 23:20:31.644304] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:13.905 [2024-07-24 23:20:31.644308] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:13.905 request: 00:32:13.905 { 00:32:13.905 "name": "nvme0", 00:32:13.905 "trtype": "tcp", 00:32:13.905 "traddr": "127.0.0.1", 00:32:13.905 "adrfam": "ipv4", 00:32:13.905 "trsvcid": "4420", 00:32:13.905 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:13.905 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:13.905 
"prchk_reftag": false, 00:32:13.905 "prchk_guard": false, 00:32:13.905 "hdgst": false, 00:32:13.905 "ddgst": false, 00:32:13.905 "psk": "key0", 00:32:13.905 "method": "bdev_nvme_attach_controller", 00:32:13.905 "req_id": 1 00:32:13.905 } 00:32:13.905 Got JSON-RPC error response 00:32:13.905 response: 00:32:13.905 { 00:32:13.905 "code": -19, 00:32:13.905 "message": "No such device" 00:32:13.905 } 00:32:13.905 23:20:31 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:32:13.905 23:20:31 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:13.905 23:20:31 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:13.905 23:20:31 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:13.905 23:20:31 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:32:13.905 23:20:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:14.166 23:20:31 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:14.166 23:20:31 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:14.166 23:20:31 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:14.166 23:20:31 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:14.166 23:20:31 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:14.166 23:20:31 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:14.166 23:20:31 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.35LoyOPWoG 00:32:14.166 23:20:31 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:14.166 23:20:31 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:14.166 23:20:31 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:14.166 23:20:31 keyring_file -- 
nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:14.166 23:20:31 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:14.166 23:20:31 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:14.166 23:20:31 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:14.166 23:20:31 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.35LoyOPWoG 00:32:14.166 23:20:31 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.35LoyOPWoG 00:32:14.166 23:20:31 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.35LoyOPWoG 00:32:14.166 23:20:31 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.35LoyOPWoG 00:32:14.166 23:20:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.35LoyOPWoG 00:32:14.427 23:20:32 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:14.427 23:20:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:14.427 nvme0n1 00:32:14.688 23:20:32 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:32:14.688 23:20:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:14.688 23:20:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:14.688 23:20:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:14.688 23:20:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:14.688 23:20:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
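The `prep_key` / `format_interchange_psk` trace above pipes the raw key through an inline `python -` heredoc (from `nvmf/common.sh`) to produce the `NVMeTLSkey-1` interchange form before writing it to the temp file. The following is a sketch of that encoding under the assumption that the appended checksum is a little-endian CRC-32 of the key bytes and the digest field is rendered as two digits — treat both details as assumptions, not a verified copy of SPDK's helper:

```python
import base64
import zlib

def format_interchange_psk(key: str, digest: int,
                           prefix: str = "NVMeTLSkey-1") -> str:
    # Sketch of the inline encoder: base64 of the key bytes with a CRC-32
    # appended (assumed little-endian), wrapped as "<prefix>:<digest>:<b64>:".
    raw = key.encode()
    crc = zlib.crc32(raw).to_bytes(4, "little")
    b64 = base64.b64encode(raw + crc).decode()
    return f"{prefix}:{digest:02x}:{b64}:"

psk = format_interchange_psk("00112233445566778899aabbccddeeff", 0)
```

The result is what gets `chmod 0600`-ed into `/tmp/tmp.35LoyOPWoG` and registered as `key0` in the steps that follow.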
00:32:14.688 23:20:32 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:32:14.688 23:20:32 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:32:14.688 23:20:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:14.949 23:20:32 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:32:14.949 23:20:32 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:32:14.949 23:20:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:14.949 23:20:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:14.949 23:20:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:14.949 23:20:32 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:32:14.949 23:20:32 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:32:14.949 23:20:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:14.949 23:20:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:14.949 23:20:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:14.949 23:20:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:14.949 23:20:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:15.210 23:20:32 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:32:15.210 23:20:32 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:15.210 23:20:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:15.518 23:20:33 keyring_file -- keyring/file.sh@104 -- # bperf_cmd 
keyring_get_keys 00:32:15.518 23:20:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:15.518 23:20:33 keyring_file -- keyring/file.sh@104 -- # jq length 00:32:15.518 23:20:33 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:32:15.518 23:20:33 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.35LoyOPWoG 00:32:15.518 23:20:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.35LoyOPWoG 00:32:15.796 23:20:33 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.x7DzWcwgWE 00:32:15.796 23:20:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.x7DzWcwgWE 00:32:15.796 23:20:33 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:15.796 23:20:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:16.056 nvme0n1 00:32:16.056 23:20:33 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:32:16.056 23:20:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:16.317 23:20:33 keyring_file -- keyring/file.sh@112 -- # config='{ 00:32:16.317 "subsystems": [ 00:32:16.317 { 00:32:16.317 "subsystem": "keyring", 00:32:16.317 "config": [ 00:32:16.317 { 00:32:16.317 "method": "keyring_file_add_key", 00:32:16.317 
"params": { 00:32:16.317 "name": "key0", 00:32:16.318 "path": "/tmp/tmp.35LoyOPWoG" 00:32:16.318 } 00:32:16.318 }, 00:32:16.318 { 00:32:16.318 "method": "keyring_file_add_key", 00:32:16.318 "params": { 00:32:16.318 "name": "key1", 00:32:16.318 "path": "/tmp/tmp.x7DzWcwgWE" 00:32:16.318 } 00:32:16.318 } 00:32:16.318 ] 00:32:16.318 }, 00:32:16.318 { 00:32:16.318 "subsystem": "iobuf", 00:32:16.318 "config": [ 00:32:16.318 { 00:32:16.318 "method": "iobuf_set_options", 00:32:16.318 "params": { 00:32:16.318 "small_pool_count": 8192, 00:32:16.318 "large_pool_count": 1024, 00:32:16.318 "small_bufsize": 8192, 00:32:16.318 "large_bufsize": 135168 00:32:16.318 } 00:32:16.318 } 00:32:16.318 ] 00:32:16.318 }, 00:32:16.318 { 00:32:16.318 "subsystem": "sock", 00:32:16.318 "config": [ 00:32:16.318 { 00:32:16.318 "method": "sock_set_default_impl", 00:32:16.318 "params": { 00:32:16.318 "impl_name": "posix" 00:32:16.318 } 00:32:16.318 }, 00:32:16.318 { 00:32:16.318 "method": "sock_impl_set_options", 00:32:16.318 "params": { 00:32:16.318 "impl_name": "ssl", 00:32:16.318 "recv_buf_size": 4096, 00:32:16.318 "send_buf_size": 4096, 00:32:16.318 "enable_recv_pipe": true, 00:32:16.318 "enable_quickack": false, 00:32:16.318 "enable_placement_id": 0, 00:32:16.318 "enable_zerocopy_send_server": true, 00:32:16.318 "enable_zerocopy_send_client": false, 00:32:16.318 "zerocopy_threshold": 0, 00:32:16.318 "tls_version": 0, 00:32:16.318 "enable_ktls": false 00:32:16.318 } 00:32:16.318 }, 00:32:16.318 { 00:32:16.318 "method": "sock_impl_set_options", 00:32:16.318 "params": { 00:32:16.318 "impl_name": "posix", 00:32:16.318 "recv_buf_size": 2097152, 00:32:16.318 "send_buf_size": 2097152, 00:32:16.318 "enable_recv_pipe": true, 00:32:16.318 "enable_quickack": false, 00:32:16.318 "enable_placement_id": 0, 00:32:16.318 "enable_zerocopy_send_server": true, 00:32:16.318 "enable_zerocopy_send_client": false, 00:32:16.318 "zerocopy_threshold": 0, 00:32:16.318 "tls_version": 0, 00:32:16.318 "enable_ktls": false 
00:32:16.318 } 00:32:16.318 } 00:32:16.318 ] 00:32:16.318 }, 00:32:16.318 { 00:32:16.318 "subsystem": "vmd", 00:32:16.318 "config": [] 00:32:16.318 }, 00:32:16.318 { 00:32:16.318 "subsystem": "accel", 00:32:16.318 "config": [ 00:32:16.318 { 00:32:16.318 "method": "accel_set_options", 00:32:16.318 "params": { 00:32:16.318 "small_cache_size": 128, 00:32:16.318 "large_cache_size": 16, 00:32:16.318 "task_count": 2048, 00:32:16.318 "sequence_count": 2048, 00:32:16.318 "buf_count": 2048 00:32:16.318 } 00:32:16.318 } 00:32:16.318 ] 00:32:16.318 }, 00:32:16.318 { 00:32:16.318 "subsystem": "bdev", 00:32:16.318 "config": [ 00:32:16.318 { 00:32:16.318 "method": "bdev_set_options", 00:32:16.318 "params": { 00:32:16.318 "bdev_io_pool_size": 65535, 00:32:16.318 "bdev_io_cache_size": 256, 00:32:16.318 "bdev_auto_examine": true, 00:32:16.318 "iobuf_small_cache_size": 128, 00:32:16.318 "iobuf_large_cache_size": 16 00:32:16.318 } 00:32:16.318 }, 00:32:16.318 { 00:32:16.318 "method": "bdev_raid_set_options", 00:32:16.318 "params": { 00:32:16.318 "process_window_size_kb": 1024, 00:32:16.318 "process_max_bandwidth_mb_sec": 0 00:32:16.318 } 00:32:16.318 }, 00:32:16.318 { 00:32:16.318 "method": "bdev_iscsi_set_options", 00:32:16.318 "params": { 00:32:16.318 "timeout_sec": 30 00:32:16.318 } 00:32:16.318 }, 00:32:16.318 { 00:32:16.318 "method": "bdev_nvme_set_options", 00:32:16.318 "params": { 00:32:16.318 "action_on_timeout": "none", 00:32:16.318 "timeout_us": 0, 00:32:16.318 "timeout_admin_us": 0, 00:32:16.318 "keep_alive_timeout_ms": 10000, 00:32:16.318 "arbitration_burst": 0, 00:32:16.318 "low_priority_weight": 0, 00:32:16.318 "medium_priority_weight": 0, 00:32:16.318 "high_priority_weight": 0, 00:32:16.318 "nvme_adminq_poll_period_us": 10000, 00:32:16.318 "nvme_ioq_poll_period_us": 0, 00:32:16.318 "io_queue_requests": 512, 00:32:16.318 "delay_cmd_submit": true, 00:32:16.318 "transport_retry_count": 4, 00:32:16.318 "bdev_retry_count": 3, 00:32:16.318 "transport_ack_timeout": 0, 
00:32:16.318 "ctrlr_loss_timeout_sec": 0, 00:32:16.318 "reconnect_delay_sec": 0, 00:32:16.318 "fast_io_fail_timeout_sec": 0, 00:32:16.318 "disable_auto_failback": false, 00:32:16.318 "generate_uuids": false, 00:32:16.318 "transport_tos": 0, 00:32:16.318 "nvme_error_stat": false, 00:32:16.318 "rdma_srq_size": 0, 00:32:16.318 "io_path_stat": false, 00:32:16.318 "allow_accel_sequence": false, 00:32:16.318 "rdma_max_cq_size": 0, 00:32:16.318 "rdma_cm_event_timeout_ms": 0, 00:32:16.318 "dhchap_digests": [ 00:32:16.318 "sha256", 00:32:16.318 "sha384", 00:32:16.318 "sha512" 00:32:16.318 ], 00:32:16.318 "dhchap_dhgroups": [ 00:32:16.318 "null", 00:32:16.318 "ffdhe2048", 00:32:16.318 "ffdhe3072", 00:32:16.318 "ffdhe4096", 00:32:16.318 "ffdhe6144", 00:32:16.318 "ffdhe8192" 00:32:16.318 ] 00:32:16.318 } 00:32:16.318 }, 00:32:16.318 { 00:32:16.318 "method": "bdev_nvme_attach_controller", 00:32:16.318 "params": { 00:32:16.318 "name": "nvme0", 00:32:16.318 "trtype": "TCP", 00:32:16.318 "adrfam": "IPv4", 00:32:16.318 "traddr": "127.0.0.1", 00:32:16.318 "trsvcid": "4420", 00:32:16.318 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:16.318 "prchk_reftag": false, 00:32:16.318 "prchk_guard": false, 00:32:16.318 "ctrlr_loss_timeout_sec": 0, 00:32:16.318 "reconnect_delay_sec": 0, 00:32:16.318 "fast_io_fail_timeout_sec": 0, 00:32:16.318 "psk": "key0", 00:32:16.318 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:16.318 "hdgst": false, 00:32:16.318 "ddgst": false 00:32:16.318 } 00:32:16.318 }, 00:32:16.318 { 00:32:16.318 "method": "bdev_nvme_set_hotplug", 00:32:16.318 "params": { 00:32:16.318 "period_us": 100000, 00:32:16.318 "enable": false 00:32:16.318 } 00:32:16.318 }, 00:32:16.318 { 00:32:16.318 "method": "bdev_wait_for_examine" 00:32:16.318 } 00:32:16.318 ] 00:32:16.318 }, 00:32:16.318 { 00:32:16.318 "subsystem": "nbd", 00:32:16.318 "config": [] 00:32:16.318 } 00:32:16.318 ] 00:32:16.318 }' 00:32:16.318 23:20:33 keyring_file -- keyring/file.sh@114 -- # killprocess 1095664 00:32:16.318 
23:20:33 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1095664 ']' 00:32:16.318 23:20:33 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1095664 00:32:16.318 23:20:33 keyring_file -- common/autotest_common.sh@955 -- # uname 00:32:16.318 23:20:33 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:16.318 23:20:33 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1095664 00:32:16.318 23:20:34 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:16.318 23:20:34 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:16.318 23:20:34 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1095664' 00:32:16.318 killing process with pid 1095664 00:32:16.318 23:20:34 keyring_file -- common/autotest_common.sh@969 -- # kill 1095664 00:32:16.318 Received shutdown signal, test time was about 1.000000 seconds 00:32:16.318 00:32:16.318 Latency(us) 00:32:16.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:16.318 =================================================================================================================== 00:32:16.318 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:16.318 23:20:34 keyring_file -- common/autotest_common.sh@974 -- # wait 1095664 00:32:16.581 23:20:34 keyring_file -- keyring/file.sh@117 -- # bperfpid=1097240 00:32:16.581 23:20:34 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1097240 /var/tmp/bperf.sock 00:32:16.581 23:20:34 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1097240 ']' 00:32:16.581 23:20:34 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:16.581 23:20:34 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:16.581 23:20:34 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 
50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:16.581 23:20:34 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:16.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:16.581 23:20:34 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:16.581 23:20:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:16.581 23:20:34 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:32:16.581 "subsystems": [ 00:32:16.581 { 00:32:16.581 "subsystem": "keyring", 00:32:16.581 "config": [ 00:32:16.581 { 00:32:16.581 "method": "keyring_file_add_key", 00:32:16.581 "params": { 00:32:16.581 "name": "key0", 00:32:16.581 "path": "/tmp/tmp.35LoyOPWoG" 00:32:16.581 } 00:32:16.581 }, 00:32:16.581 { 00:32:16.581 "method": "keyring_file_add_key", 00:32:16.581 "params": { 00:32:16.581 "name": "key1", 00:32:16.581 "path": "/tmp/tmp.x7DzWcwgWE" 00:32:16.581 } 00:32:16.581 } 00:32:16.581 ] 00:32:16.581 }, 00:32:16.581 { 00:32:16.581 "subsystem": "iobuf", 00:32:16.581 "config": [ 00:32:16.581 { 00:32:16.581 "method": "iobuf_set_options", 00:32:16.581 "params": { 00:32:16.581 "small_pool_count": 8192, 00:32:16.581 "large_pool_count": 1024, 00:32:16.581 "small_bufsize": 8192, 00:32:16.581 "large_bufsize": 135168 00:32:16.581 } 00:32:16.581 } 00:32:16.581 ] 00:32:16.581 }, 00:32:16.581 { 00:32:16.581 "subsystem": "sock", 00:32:16.581 "config": [ 00:32:16.581 { 00:32:16.581 "method": "sock_set_default_impl", 00:32:16.581 "params": { 00:32:16.581 "impl_name": "posix" 00:32:16.581 } 00:32:16.581 }, 00:32:16.581 { 00:32:16.581 "method": "sock_impl_set_options", 00:32:16.581 "params": { 00:32:16.581 "impl_name": "ssl", 00:32:16.581 "recv_buf_size": 4096, 00:32:16.581 "send_buf_size": 4096, 00:32:16.581 "enable_recv_pipe": true, 00:32:16.581 "enable_quickack": false, 00:32:16.581 "enable_placement_id": 0, 00:32:16.581 
"enable_zerocopy_send_server": true, 00:32:16.581 "enable_zerocopy_send_client": false, 00:32:16.581 "zerocopy_threshold": 0, 00:32:16.581 "tls_version": 0, 00:32:16.581 "enable_ktls": false 00:32:16.581 } 00:32:16.581 }, 00:32:16.581 { 00:32:16.581 "method": "sock_impl_set_options", 00:32:16.581 "params": { 00:32:16.581 "impl_name": "posix", 00:32:16.581 "recv_buf_size": 2097152, 00:32:16.581 "send_buf_size": 2097152, 00:32:16.581 "enable_recv_pipe": true, 00:32:16.581 "enable_quickack": false, 00:32:16.581 "enable_placement_id": 0, 00:32:16.581 "enable_zerocopy_send_server": true, 00:32:16.581 "enable_zerocopy_send_client": false, 00:32:16.581 "zerocopy_threshold": 0, 00:32:16.581 "tls_version": 0, 00:32:16.581 "enable_ktls": false 00:32:16.581 } 00:32:16.581 } 00:32:16.581 ] 00:32:16.581 }, 00:32:16.581 { 00:32:16.581 "subsystem": "vmd", 00:32:16.581 "config": [] 00:32:16.581 }, 00:32:16.581 { 00:32:16.581 "subsystem": "accel", 00:32:16.581 "config": [ 00:32:16.581 { 00:32:16.581 "method": "accel_set_options", 00:32:16.581 "params": { 00:32:16.581 "small_cache_size": 128, 00:32:16.581 "large_cache_size": 16, 00:32:16.581 "task_count": 2048, 00:32:16.581 "sequence_count": 2048, 00:32:16.581 "buf_count": 2048 00:32:16.581 } 00:32:16.581 } 00:32:16.581 ] 00:32:16.581 }, 00:32:16.581 { 00:32:16.581 "subsystem": "bdev", 00:32:16.581 "config": [ 00:32:16.581 { 00:32:16.581 "method": "bdev_set_options", 00:32:16.581 "params": { 00:32:16.581 "bdev_io_pool_size": 65535, 00:32:16.581 "bdev_io_cache_size": 256, 00:32:16.581 "bdev_auto_examine": true, 00:32:16.581 "iobuf_small_cache_size": 128, 00:32:16.581 "iobuf_large_cache_size": 16 00:32:16.581 } 00:32:16.581 }, 00:32:16.581 { 00:32:16.581 "method": "bdev_raid_set_options", 00:32:16.581 "params": { 00:32:16.581 "process_window_size_kb": 1024, 00:32:16.581 "process_max_bandwidth_mb_sec": 0 00:32:16.581 } 00:32:16.581 }, 00:32:16.581 { 00:32:16.581 "method": "bdev_iscsi_set_options", 00:32:16.581 "params": { 00:32:16.581 
"timeout_sec": 30 00:32:16.581 } 00:32:16.581 }, 00:32:16.581 { 00:32:16.581 "method": "bdev_nvme_set_options", 00:32:16.581 "params": { 00:32:16.581 "action_on_timeout": "none", 00:32:16.581 "timeout_us": 0, 00:32:16.581 "timeout_admin_us": 0, 00:32:16.581 "keep_alive_timeout_ms": 10000, 00:32:16.581 "arbitration_burst": 0, 00:32:16.581 "low_priority_weight": 0, 00:32:16.581 "medium_priority_weight": 0, 00:32:16.581 "high_priority_weight": 0, 00:32:16.581 "nvme_adminq_poll_period_us": 10000, 00:32:16.581 "nvme_ioq_poll_period_us": 0, 00:32:16.581 "io_queue_requests": 512, 00:32:16.581 "delay_cmd_submit": true, 00:32:16.581 "transport_retry_count": 4, 00:32:16.581 "bdev_retry_count": 3, 00:32:16.581 "transport_ack_timeout": 0, 00:32:16.581 "ctrlr_loss_timeout_sec": 0, 00:32:16.581 "reconnect_delay_sec": 0, 00:32:16.581 "fast_io_fail_timeout_sec": 0, 00:32:16.581 "disable_auto_failback": false, 00:32:16.581 "generate_uuids": false, 00:32:16.581 "transport_tos": 0, 00:32:16.581 "nvme_error_stat": false, 00:32:16.581 "rdma_srq_size": 0, 00:32:16.581 "io_path_stat": false, 00:32:16.581 "allow_accel_sequence": false, 00:32:16.581 "rdma_max_cq_size": 0, 00:32:16.581 "rdma_cm_event_timeout_ms": 0, 00:32:16.581 "dhchap_digests": [ 00:32:16.581 "sha256", 00:32:16.582 "sha384", 00:32:16.582 "sha512" 00:32:16.582 ], 00:32:16.582 "dhchap_dhgroups": [ 00:32:16.582 "null", 00:32:16.582 "ffdhe2048", 00:32:16.582 "ffdhe3072", 00:32:16.582 "ffdhe4096", 00:32:16.582 "ffdhe6144", 00:32:16.582 "ffdhe8192" 00:32:16.582 ] 00:32:16.582 } 00:32:16.582 }, 00:32:16.582 { 00:32:16.582 "method": "bdev_nvme_attach_controller", 00:32:16.582 "params": { 00:32:16.582 "name": "nvme0", 00:32:16.582 "trtype": "TCP", 00:32:16.582 "adrfam": "IPv4", 00:32:16.582 "traddr": "127.0.0.1", 00:32:16.582 "trsvcid": "4420", 00:32:16.582 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:16.582 "prchk_reftag": false, 00:32:16.582 "prchk_guard": false, 00:32:16.582 "ctrlr_loss_timeout_sec": 0, 00:32:16.582 
"reconnect_delay_sec": 0, 00:32:16.582 "fast_io_fail_timeout_sec": 0, 00:32:16.582 "psk": "key0", 00:32:16.582 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:16.582 "hdgst": false, 00:32:16.582 "ddgst": false 00:32:16.582 } 00:32:16.582 }, 00:32:16.582 { 00:32:16.582 "method": "bdev_nvme_set_hotplug", 00:32:16.582 "params": { 00:32:16.582 "period_us": 100000, 00:32:16.582 "enable": false 00:32:16.582 } 00:32:16.582 }, 00:32:16.582 { 00:32:16.582 "method": "bdev_wait_for_examine" 00:32:16.582 } 00:32:16.582 ] 00:32:16.582 }, 00:32:16.582 { 00:32:16.582 "subsystem": "nbd", 00:32:16.582 "config": [] 00:32:16.582 } 00:32:16.582 ] 00:32:16.582 }' 00:32:16.582 [2024-07-24 23:20:34.196560] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 00:32:16.582 [2024-07-24 23:20:34.196616] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1097240 ] 00:32:16.582 EAL: No free 2048 kB hugepages reported on node 1 00:32:16.582 [2024-07-24 23:20:34.277522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.582 [2024-07-24 23:20:34.331175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:16.843 [2024-07-24 23:20:34.472375] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:17.414 23:20:34 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:17.414 23:20:34 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:32:17.414 23:20:34 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:32:17.414 23:20:34 keyring_file -- keyring/file.sh@120 -- # jq length 00:32:17.414 23:20:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:17.414 
23:20:35 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:32:17.414 23:20:35 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:32:17.414 23:20:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:17.414 23:20:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:17.414 23:20:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:17.414 23:20:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:17.415 23:20:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:17.675 23:20:35 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:17.675 23:20:35 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:32:17.675 23:20:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:17.675 23:20:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:17.675 23:20:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:17.675 23:20:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:17.675 23:20:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:17.675 23:20:35 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:32:17.675 23:20:35 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:32:17.675 23:20:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:17.675 23:20:35 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:32:17.936 23:20:35 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:32:17.936 23:20:35 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:17.936 23:20:35 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.35LoyOPWoG 
/tmp/tmp.x7DzWcwgWE 00:32:17.936 23:20:35 keyring_file -- keyring/file.sh@20 -- # killprocess 1097240 00:32:17.936 23:20:35 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1097240 ']' 00:32:17.936 23:20:35 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1097240 00:32:17.936 23:20:35 keyring_file -- common/autotest_common.sh@955 -- # uname 00:32:17.936 23:20:35 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:17.936 23:20:35 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1097240 00:32:17.936 23:20:35 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:17.936 23:20:35 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:17.937 23:20:35 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1097240' 00:32:17.937 killing process with pid 1097240 00:32:17.937 23:20:35 keyring_file -- common/autotest_common.sh@969 -- # kill 1097240 00:32:17.937 Received shutdown signal, test time was about 1.000000 seconds 00:32:17.937 00:32:17.937 Latency(us) 00:32:17.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:17.937 =================================================================================================================== 00:32:17.937 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:17.937 23:20:35 keyring_file -- common/autotest_common.sh@974 -- # wait 1097240 00:32:18.197 23:20:35 keyring_file -- keyring/file.sh@21 -- # killprocess 1095476 00:32:18.197 23:20:35 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1095476 ']' 00:32:18.197 23:20:35 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1095476 00:32:18.197 23:20:35 keyring_file -- common/autotest_common.sh@955 -- # uname 00:32:18.197 23:20:35 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:18.197 23:20:35 keyring_file -- common/autotest_common.sh@956 -- # 
ps --no-headers -o comm= 1095476 00:32:18.197 23:20:35 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:18.197 23:20:35 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:18.197 23:20:35 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1095476' 00:32:18.197 killing process with pid 1095476 00:32:18.197 23:20:35 keyring_file -- common/autotest_common.sh@969 -- # kill 1095476 00:32:18.197 [2024-07-24 23:20:35.821224] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:18.197 23:20:35 keyring_file -- common/autotest_common.sh@974 -- # wait 1095476 00:32:18.459 00:32:18.459 real 0m10.988s 00:32:18.459 user 0m25.710s 00:32:18.459 sys 0m2.530s 00:32:18.459 23:20:36 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:18.459 23:20:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:18.459 ************************************ 00:32:18.459 END TEST keyring_file 00:32:18.459 ************************************ 00:32:18.459 23:20:36 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:32:18.459 23:20:36 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:18.459 23:20:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:18.459 23:20:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:18.459 23:20:36 -- common/autotest_common.sh@10 -- # set +x 00:32:18.459 ************************************ 00:32:18.459 START TEST keyring_linux 00:32:18.459 ************************************ 00:32:18.459 23:20:36 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:18.459 * Looking for test storage... 
00:32:18.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:18.459 23:20:36 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:18.459 23:20:36 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:18.459 23:20:36 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:18.459 23:20:36 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:18.459 23:20:36 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:18.459 23:20:36 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:18.459 23:20:36 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:18.459 23:20:36 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:18.459 23:20:36 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:18.459 23:20:36 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:18.459 23:20:36 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:18.459 23:20:36 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:18.459 23:20:36 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:18.459 23:20:36 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:18.459 23:20:36 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:18.459 23:20:36 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:18.459 23:20:36 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:18.459 23:20:36 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:18.459 23:20:36 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:18.459 23:20:36 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:18.459 23:20:36 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:18.459 23:20:36 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:18.459 23:20:36 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:18.459 23:20:36 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.459 23:20:36 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.459 23:20:36 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.459 23:20:36 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:18.459 23:20:36 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.459 23:20:36 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:32:18.459 23:20:36 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:18.459 23:20:36 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:18.459 23:20:36 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:18.459 23:20:36 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:18.459 23:20:36 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:18.460 23:20:36 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:18.460 23:20:36 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:18.460 23:20:36 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:18.460 23:20:36 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:18.460 23:20:36 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:18.460 23:20:36 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:18.460 23:20:36 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:18.460 23:20:36 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:18.460 23:20:36 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:18.460 23:20:36 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:18.460 23:20:36 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:18.460 23:20:36 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:32:18.460 23:20:36 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:18.722 23:20:36 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:18.722 23:20:36 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:18.722 23:20:36 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:18.722 23:20:36 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:18.722 23:20:36 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:18.722 23:20:36 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:18.722 23:20:36 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:18.722 23:20:36 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:18.722 23:20:36 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:18.722 23:20:36 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:18.722 23:20:36 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:18.722 /tmp/:spdk-test:key0 00:32:18.722 23:20:36 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:18.722 23:20:36 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:18.722 23:20:36 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:18.722 23:20:36 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:18.722 23:20:36 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:18.722 23:20:36 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:18.722 23:20:36 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:18.722 23:20:36 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:32:18.722 23:20:36 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:18.722 23:20:36 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:18.722 23:20:36 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:18.722 23:20:36 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:18.722 23:20:36 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:18.722 23:20:36 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:18.722 23:20:36 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:18.722 /tmp/:spdk-test:key1 00:32:18.722 23:20:36 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:18.722 23:20:36 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1097828 00:32:18.722 23:20:36 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1097828 00:32:18.722 23:20:36 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1097828 ']' 00:32:18.722 23:20:36 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:18.722 23:20:36 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:18.722 23:20:36 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:18.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:18.722 23:20:36 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:18.722 23:20:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:18.722 [2024-07-24 23:20:36.385917] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:32:18.722 [2024-07-24 23:20:36.385991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1097828 ] 00:32:18.722 EAL: No free 2048 kB hugepages reported on node 1 00:32:18.722 [2024-07-24 23:20:36.460043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.983 [2024-07-24 23:20:36.536734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:19.553 23:20:37 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:19.553 23:20:37 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:32:19.553 23:20:37 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:32:19.553 23:20:37 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.553 23:20:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:19.554 [2024-07-24 23:20:37.187218] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:19.554 null0 00:32:19.554 [2024-07-24 23:20:37.219268] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:19.554 [2024-07-24 23:20:37.219841] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:19.554 23:20:37 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.554 23:20:37 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:32:19.554 856564453 00:32:19.554 23:20:37 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:32:19.554 560750497 00:32:19.554 23:20:37 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1097928 00:32:19.554 23:20:37 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1097928 
/var/tmp/bperf.sock 00:32:19.554 23:20:37 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1097928 ']' 00:32:19.554 23:20:37 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:19.554 23:20:37 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:19.554 23:20:37 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:19.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:19.554 23:20:37 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:19.554 23:20:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:19.554 23:20:37 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:32:19.554 [2024-07-24 23:20:37.290829] Starting SPDK v24.09-pre git sha1 415e0bb41 / DPDK 24.03.0 initialization... 
00:32:19.554 [2024-07-24 23:20:37.290880] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1097928 ] 00:32:19.554 EAL: No free 2048 kB hugepages reported on node 1 00:32:19.814 [2024-07-24 23:20:37.371157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:19.814 [2024-07-24 23:20:37.424649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:20.385 23:20:38 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:20.385 23:20:38 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:32:20.385 23:20:38 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:32:20.386 23:20:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:32:20.646 23:20:38 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:32:20.646 23:20:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:20.646 23:20:38 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:20.646 23:20:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:20.907 [2024-07-24 23:20:38.527025] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:20.907 
nvme0n1 00:32:20.907 23:20:38 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:32:20.907 23:20:38 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:32:20.907 23:20:38 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:20.907 23:20:38 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:20.907 23:20:38 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:20.907 23:20:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:21.167 23:20:38 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:32:21.167 23:20:38 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:21.167 23:20:38 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:32:21.167 23:20:38 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:32:21.167 23:20:38 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:21.167 23:20:38 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:32:21.167 23:20:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:21.167 23:20:38 keyring_linux -- keyring/linux.sh@25 -- # sn=856564453 00:32:21.167 23:20:38 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:32:21.167 23:20:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:21.167 23:20:38 keyring_linux -- keyring/linux.sh@26 -- # [[ 856564453 == \8\5\6\5\6\4\4\5\3 ]] 00:32:21.167 23:20:38 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 856564453 00:32:21.428 23:20:38 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:32:21.428 23:20:38 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:21.428 Running I/O for 1 seconds... 00:32:22.371 00:32:22.371 Latency(us) 00:32:22.371 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:22.371 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:22.371 nvme0n1 : 1.02 8194.03 32.01 0.00 0.00 15496.17 3413.33 16820.91 00:32:22.371 =================================================================================================================== 00:32:22.371 Total : 8194.03 32.01 0.00 0.00 15496.17 3413.33 16820.91 00:32:22.371 0 00:32:22.371 23:20:40 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:22.371 23:20:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:22.632 23:20:40 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:32:22.632 23:20:40 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:32:22.632 23:20:40 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:22.632 23:20:40 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:22.632 23:20:40 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:22.632 23:20:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:22.632 23:20:40 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:32:22.632 23:20:40 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:22.632 23:20:40 keyring_linux -- keyring/linux.sh@23 -- # return 00:32:22.632 23:20:40 keyring_linux -- 
keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:22.632 23:20:40 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:32:22.632 23:20:40 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:22.632 23:20:40 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:22.632 23:20:40 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:22.632 23:20:40 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:22.632 23:20:40 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:22.632 23:20:40 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:22.632 23:20:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:22.893 [2024-07-24 23:20:40.523596] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:22.893 [2024-07-24 23:20:40.524316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x922000 (107): Transport endpoint is not connected 00:32:22.893 [2024-07-24 23:20:40.525311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x922000 (9): Bad file descriptor 00:32:22.893 [2024-07-24 23:20:40.526312] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:22.893 [2024-07-24 23:20:40.526323] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:22.893 [2024-07-24 23:20:40.526328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:22.893 request: 00:32:22.893 { 00:32:22.893 "name": "nvme0", 00:32:22.893 "trtype": "tcp", 00:32:22.893 "traddr": "127.0.0.1", 00:32:22.893 "adrfam": "ipv4", 00:32:22.893 "trsvcid": "4420", 00:32:22.893 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:22.893 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:22.893 "prchk_reftag": false, 00:32:22.893 "prchk_guard": false, 00:32:22.893 "hdgst": false, 00:32:22.893 "ddgst": false, 00:32:22.893 "psk": ":spdk-test:key1", 00:32:22.893 "method": "bdev_nvme_attach_controller", 00:32:22.893 "req_id": 1 00:32:22.893 } 00:32:22.893 Got JSON-RPC error response 00:32:22.893 response: 00:32:22.893 { 00:32:22.893 "code": -5, 00:32:22.893 "message": "Input/output error" 00:32:22.893 } 00:32:22.893 23:20:40 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:32:22.893 23:20:40 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:22.893 23:20:40 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:22.893 23:20:40 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:22.893 23:20:40 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:32:22.893 23:20:40 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:22.893 23:20:40 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:32:22.893 23:20:40 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:32:22.893 23:20:40 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:32:22.893 23:20:40 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key0 00:32:22.893 23:20:40 keyring_linux -- keyring/linux.sh@33 -- # sn=856564453 00:32:22.893 23:20:40 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 856564453 00:32:22.893 1 links removed 00:32:22.893 23:20:40 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:22.893 23:20:40 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:32:22.893 23:20:40 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:32:22.893 23:20:40 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:32:22.893 23:20:40 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:32:22.893 23:20:40 keyring_linux -- keyring/linux.sh@33 -- # sn=560750497 00:32:22.893 23:20:40 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 560750497 00:32:22.893 1 links removed 00:32:22.893 23:20:40 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1097928 00:32:22.893 23:20:40 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1097928 ']' 00:32:22.893 23:20:40 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1097928 00:32:22.893 23:20:40 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:32:22.893 23:20:40 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:22.893 23:20:40 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1097928 00:32:22.893 23:20:40 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:22.893 23:20:40 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:22.893 23:20:40 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1097928' 00:32:22.893 killing process with pid 1097928 00:32:22.893 23:20:40 keyring_linux -- common/autotest_common.sh@969 -- # kill 1097928 00:32:22.893 Received shutdown signal, test time was about 1.000000 seconds 00:32:22.893 00:32:22.893 Latency(us) 00:32:22.893 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:22.893 =================================================================================================================== 00:32:22.893 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:22.893 23:20:40 keyring_linux -- common/autotest_common.sh@974 -- # wait 1097928 00:32:23.153 23:20:40 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1097828 00:32:23.153 23:20:40 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1097828 ']' 00:32:23.153 23:20:40 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1097828 00:32:23.153 23:20:40 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:32:23.153 23:20:40 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:23.153 23:20:40 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1097828 00:32:23.153 23:20:40 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:23.153 23:20:40 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:23.153 23:20:40 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1097828' 00:32:23.154 killing process with pid 1097828 00:32:23.154 23:20:40 keyring_linux -- common/autotest_common.sh@969 -- # kill 1097828 00:32:23.154 23:20:40 keyring_linux -- common/autotest_common.sh@974 -- # wait 1097828 00:32:23.415 00:32:23.415 real 0m4.869s 00:32:23.415 user 0m8.305s 00:32:23.415 sys 0m1.398s 00:32:23.415 23:20:40 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:23.415 23:20:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:23.415 ************************************ 00:32:23.415 END TEST keyring_linux 00:32:23.415 ************************************ 00:32:23.415 23:20:41 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:32:23.415 23:20:41 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:32:23.415 23:20:41 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 
']' 00:32:23.415 23:20:41 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:32:23.415 23:20:41 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:32:23.415 23:20:41 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:32:23.415 23:20:41 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:32:23.415 23:20:41 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:32:23.415 23:20:41 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:32:23.415 23:20:41 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:32:23.415 23:20:41 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:32:23.415 23:20:41 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:32:23.415 23:20:41 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:32:23.415 23:20:41 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:32:23.415 23:20:41 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:32:23.415 23:20:41 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:32:23.415 23:20:41 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:32:23.415 23:20:41 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:23.415 23:20:41 -- common/autotest_common.sh@10 -- # set +x 00:32:23.415 23:20:41 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:32:23.415 23:20:41 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:32:23.415 23:20:41 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:32:23.415 23:20:41 -- common/autotest_common.sh@10 -- # set +x 00:32:31.555 INFO: APP EXITING 00:32:31.555 INFO: killing all VMs 00:32:31.555 INFO: killing vhost app 00:32:31.555 INFO: EXIT DONE 00:32:34.101 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:32:34.101 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:32:34.361 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:32:34.361 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:32:34.361 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:32:34.361 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:32:34.361 0000:80:01.0 (8086 0b00): Already 
using the ioatdma driver 00:32:34.361 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:32:34.361 0000:65:00.0 (144d a80a): Already using the nvme driver 00:32:34.361 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:32:34.361 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:32:34.361 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:32:34.361 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:32:34.361 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:32:34.622 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:32:34.622 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:32:34.622 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:32:38.825 Cleaning 00:32:38.825 Removing: /var/run/dpdk/spdk0/config 00:32:38.825 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:38.825 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:38.825 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:38.825 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:38.825 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:38.825 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:38.825 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:38.825 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:38.825 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:38.825 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:38.825 Removing: /var/run/dpdk/spdk1/config 00:32:38.825 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:38.825 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:38.825 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:38.825 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:38.825 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:38.825 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:38.825 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 
00:32:38.825 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:38.825 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:38.825 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:38.825 Removing: /var/run/dpdk/spdk1/mp_socket 00:32:38.825 Removing: /var/run/dpdk/spdk2/config 00:32:38.825 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:38.825 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:38.825 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:38.825 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:38.825 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:38.825 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:38.825 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:38.825 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:38.825 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:38.825 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:38.825 Removing: /var/run/dpdk/spdk3/config 00:32:38.825 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:38.825 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:38.825 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:38.825 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:38.825 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:38.825 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:38.825 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:38.825 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:38.825 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:38.825 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:38.825 Removing: /var/run/dpdk/spdk4/config 00:32:38.825 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:38.825 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:38.825 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:38.825 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:38.825 
Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:38.825 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:38.825 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:38.825 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:38.825 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:38.825 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:38.825 Removing: /dev/shm/bdev_svc_trace.1 00:32:38.825 Removing: /dev/shm/nvmf_trace.0 00:32:38.825 Removing: /dev/shm/spdk_tgt_trace.pid618535 00:32:38.825 Removing: /var/run/dpdk/spdk0 00:32:38.825 Removing: /var/run/dpdk/spdk1 00:32:38.825 Removing: /var/run/dpdk/spdk2 00:32:38.825 Removing: /var/run/dpdk/spdk3 00:32:38.825 Removing: /var/run/dpdk/spdk4 00:32:38.825 Removing: /var/run/dpdk/spdk_pid1003561 00:32:38.825 Removing: /var/run/dpdk/spdk_pid1003580 00:32:38.825 Removing: /var/run/dpdk/spdk_pid1027819 00:32:38.825 Removing: /var/run/dpdk/spdk_pid1028501 00:32:38.825 Removing: /var/run/dpdk/spdk_pid1029213 00:32:38.825 Removing: /var/run/dpdk/spdk_pid1029999 00:32:38.825 Removing: /var/run/dpdk/spdk_pid1030951 00:32:38.825 Removing: /var/run/dpdk/spdk_pid1031777 00:32:38.825 Removing: /var/run/dpdk/spdk_pid1032596 00:32:38.825 Removing: /var/run/dpdk/spdk_pid1033297 00:32:38.825 Removing: /var/run/dpdk/spdk_pid1038872 00:32:38.825 Removing: /var/run/dpdk/spdk_pid1039098 00:32:38.825 Removing: /var/run/dpdk/spdk_pid1046744 00:32:38.825 Removing: /var/run/dpdk/spdk_pid1047121 00:32:38.825 Removing: /var/run/dpdk/spdk_pid1049633 00:32:38.825 Removing: /var/run/dpdk/spdk_pid1057841 00:32:38.825 Removing: /var/run/dpdk/spdk_pid1057862 00:32:38.825 Removing: /var/run/dpdk/spdk_pid1064631 00:32:38.825 Removing: /var/run/dpdk/spdk_pid1066838 00:32:38.825 Removing: /var/run/dpdk/spdk_pid1069324 00:32:38.825 Removing: /var/run/dpdk/spdk_pid1070536 00:32:38.825 Removing: /var/run/dpdk/spdk_pid1073062 00:32:38.825 Removing: /var/run/dpdk/spdk_pid1074580 00:32:38.825 Removing: 
/var/run/dpdk/spdk_pid1085279 00:32:38.825 Removing: /var/run/dpdk/spdk_pid1085799 00:32:38.825 Removing: /var/run/dpdk/spdk_pid1086464 00:32:38.825 Removing: /var/run/dpdk/spdk_pid1089524 00:32:38.825 Removing: /var/run/dpdk/spdk_pid1090189 00:32:38.825 Removing: /var/run/dpdk/spdk_pid1090632 00:32:38.825 Removing: /var/run/dpdk/spdk_pid1095476 00:32:38.825 Removing: /var/run/dpdk/spdk_pid1095664 00:32:38.825 Removing: /var/run/dpdk/spdk_pid1097240 00:32:38.825 Removing: /var/run/dpdk/spdk_pid1097828 00:32:38.825 Removing: /var/run/dpdk/spdk_pid1097928 00:32:38.825 Removing: /var/run/dpdk/spdk_pid617011 00:32:38.825 Removing: /var/run/dpdk/spdk_pid618535 00:32:38.825 Removing: /var/run/dpdk/spdk_pid619144 00:32:38.825 Removing: /var/run/dpdk/spdk_pid620355 00:32:38.825 Removing: /var/run/dpdk/spdk_pid620486 00:32:38.825 Removing: /var/run/dpdk/spdk_pid621810 00:32:38.825 Removing: /var/run/dpdk/spdk_pid621837 00:32:38.825 Removing: /var/run/dpdk/spdk_pid622274 00:32:38.825 Removing: /var/run/dpdk/spdk_pid623224 00:32:38.825 Removing: /var/run/dpdk/spdk_pid623864 00:32:38.825 Removing: /var/run/dpdk/spdk_pid624247 00:32:38.825 Removing: /var/run/dpdk/spdk_pid624636 00:32:38.825 Removing: /var/run/dpdk/spdk_pid624943 00:32:38.825 Removing: /var/run/dpdk/spdk_pid625169 00:32:38.825 Removing: /var/run/dpdk/spdk_pid625467 00:32:38.825 Removing: /var/run/dpdk/spdk_pid625824 00:32:38.825 Removing: /var/run/dpdk/spdk_pid626200 00:32:38.825 Removing: /var/run/dpdk/spdk_pid627268 00:32:38.825 Removing: /var/run/dpdk/spdk_pid630641 00:32:38.825 Removing: /var/run/dpdk/spdk_pid631012 00:32:38.825 Removing: /var/run/dpdk/spdk_pid631319 00:32:38.825 Removing: /var/run/dpdk/spdk_pid631582 00:32:38.825 Removing: /var/run/dpdk/spdk_pid632019 00:32:38.825 Removing: /var/run/dpdk/spdk_pid632353 00:32:38.825 Removing: /var/run/dpdk/spdk_pid632775 00:32:38.825 Removing: /var/run/dpdk/spdk_pid632820 00:32:38.825 Removing: /var/run/dpdk/spdk_pid633150 00:32:38.825 Removing: 
/var/run/dpdk/spdk_pid633478 00:32:38.825 Removing: /var/run/dpdk/spdk_pid633525 00:32:38.825 Removing: /var/run/dpdk/spdk_pid633863 00:32:38.825 Removing: /var/run/dpdk/spdk_pid634687 00:32:38.825 Removing: /var/run/dpdk/spdk_pid635093 00:32:38.825 Removing: /var/run/dpdk/spdk_pid635358 00:32:38.825 Removing: /var/run/dpdk/spdk_pid640331 00:32:38.825 Removing: /var/run/dpdk/spdk_pid646063 00:32:38.825 Removing: /var/run/dpdk/spdk_pid658744 00:32:38.825 Removing: /var/run/dpdk/spdk_pid659477 00:32:38.825 Removing: /var/run/dpdk/spdk_pid665217 00:32:38.825 Removing: /var/run/dpdk/spdk_pid665570 00:32:38.825 Removing: /var/run/dpdk/spdk_pid671279 00:32:38.825 Removing: /var/run/dpdk/spdk_pid678715 00:32:38.825 Removing: /var/run/dpdk/spdk_pid681813 00:32:38.825 Removing: /var/run/dpdk/spdk_pid695892 00:32:38.825 Removing: /var/run/dpdk/spdk_pid707714 00:32:38.825 Removing: /var/run/dpdk/spdk_pid709886 00:32:38.825 Removing: /var/run/dpdk/spdk_pid710972 00:32:38.825 Removing: /var/run/dpdk/spdk_pid732989 00:32:38.825 Removing: /var/run/dpdk/spdk_pid738126 00:32:38.825 Removing: /var/run/dpdk/spdk_pid795931 00:32:38.825 Removing: /var/run/dpdk/spdk_pid802682 00:32:38.825 Removing: /var/run/dpdk/spdk_pid810754 00:32:38.825 Removing: /var/run/dpdk/spdk_pid818306 00:32:38.825 Removing: /var/run/dpdk/spdk_pid818322 00:32:38.825 Removing: /var/run/dpdk/spdk_pid819327 00:32:38.825 Removing: /var/run/dpdk/spdk_pid820340 00:32:38.825 Removing: /var/run/dpdk/spdk_pid821379 00:32:38.826 Removing: /var/run/dpdk/spdk_pid822018 00:32:38.826 Removing: /var/run/dpdk/spdk_pid822148 00:32:38.826 Removing: /var/run/dpdk/spdk_pid822361 00:32:38.826 Removing: /var/run/dpdk/spdk_pid822627 00:32:38.826 Removing: /var/run/dpdk/spdk_pid822672 00:32:38.826 Removing: /var/run/dpdk/spdk_pid823675 00:32:38.826 Removing: /var/run/dpdk/spdk_pid824681 00:32:39.086 Removing: /var/run/dpdk/spdk_pid825691 00:32:39.086 Removing: /var/run/dpdk/spdk_pid826360 00:32:39.086 Removing: 
/var/run/dpdk/spdk_pid826362 00:32:39.086 Removing: /var/run/dpdk/spdk_pid826703 00:32:39.086 Removing: /var/run/dpdk/spdk_pid828071 00:32:39.086 Removing: /var/run/dpdk/spdk_pid829264 00:32:39.086 Removing: /var/run/dpdk/spdk_pid839867 00:32:39.086 Removing: /var/run/dpdk/spdk_pid871649 00:32:39.086 Removing: /var/run/dpdk/spdk_pid877629 00:32:39.086 Removing: /var/run/dpdk/spdk_pid879509 00:32:39.086 Removing: /var/run/dpdk/spdk_pid881645 00:32:39.086 Removing: /var/run/dpdk/spdk_pid881983 00:32:39.086 Removing: /var/run/dpdk/spdk_pid882148 00:32:39.086 Removing: /var/run/dpdk/spdk_pid882349 00:32:39.086 Removing: /var/run/dpdk/spdk_pid883062 00:32:39.086 Removing: /var/run/dpdk/spdk_pid885262 00:32:39.086 Removing: /var/run/dpdk/spdk_pid886257 00:32:39.086 Removing: /var/run/dpdk/spdk_pid886858 00:32:39.086 Removing: /var/run/dpdk/spdk_pid889491 00:32:39.086 Removing: /var/run/dpdk/spdk_pid890380 00:32:39.086 Removing: /var/run/dpdk/spdk_pid891094 00:32:39.086 Removing: /var/run/dpdk/spdk_pid896949 00:32:39.086 Removing: /var/run/dpdk/spdk_pid910144 00:32:39.086 Removing: /var/run/dpdk/spdk_pid914946 00:32:39.086 Removing: /var/run/dpdk/spdk_pid922654 00:32:39.086 Removing: /var/run/dpdk/spdk_pid924128 00:32:39.086 Removing: /var/run/dpdk/spdk_pid925961 00:32:39.086 Removing: /var/run/dpdk/spdk_pid931749 00:32:39.086 Removing: /var/run/dpdk/spdk_pid937138 00:32:39.086 Removing: /var/run/dpdk/spdk_pid947239 00:32:39.086 Removing: /var/run/dpdk/spdk_pid947241 00:32:39.086 Removing: /var/run/dpdk/spdk_pid953195 00:32:39.086 Removing: /var/run/dpdk/spdk_pid953530 00:32:39.086 Removing: /var/run/dpdk/spdk_pid953840 00:32:39.086 Removing: /var/run/dpdk/spdk_pid954197 00:32:39.086 Removing: /var/run/dpdk/spdk_pid954215 00:32:39.086 Removing: /var/run/dpdk/spdk_pid960251 00:32:39.086 Removing: /var/run/dpdk/spdk_pid960919 00:32:39.086 Removing: /var/run/dpdk/spdk_pid966691 00:32:39.086 Removing: /var/run/dpdk/spdk_pid969957 00:32:39.086 Removing: 
/var/run/dpdk/spdk_pid977006 00:32:39.086 Removing: /var/run/dpdk/spdk_pid983902 00:32:39.086 Removing: /var/run/dpdk/spdk_pid994360 00:32:39.086 Clean 00:32:39.086 23:20:56 -- common/autotest_common.sh@1451 -- # return 0 00:32:39.086 23:20:56 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:32:39.086 23:20:56 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:39.086 23:20:56 -- common/autotest_common.sh@10 -- # set +x 00:32:39.347 23:20:56 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:32:39.347 23:20:56 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:39.347 23:20:56 -- common/autotest_common.sh@10 -- # set +x 00:32:39.347 23:20:56 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:39.347 23:20:56 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:32:39.347 23:20:56 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:32:39.347 23:20:56 -- spdk/autotest.sh@395 -- # hash lcov 00:32:39.347 23:20:56 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:39.347 23:20:56 -- spdk/autotest.sh@397 -- # hostname 00:32:39.347 23:20:56 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:32:39.347 geninfo: WARNING: invalid characters removed from testname! 
00:33:05.983 23:21:21 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:05.983 23:21:23 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:07.893 23:21:25 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:09.274 23:21:26 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:11.184 23:21:28 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:12.566 23:21:30 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:13.951 23:21:31 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:14.212 23:21:31 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:14.212 23:21:31 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:14.212 23:21:31 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:14.212 23:21:31 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:14.212 23:21:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.212 23:21:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.212 23:21:31 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.212 23:21:31 -- paths/export.sh@5 -- $ export PATH 00:33:14.212 23:21:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.212 23:21:31 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:33:14.212 23:21:31 -- common/autobuild_common.sh@447 -- $ date +%s 00:33:14.212 23:21:31 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721856091.XXXXXX 00:33:14.212 23:21:31 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721856091.6thGgx 00:33:14.212 23:21:31 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:33:14.212 23:21:31 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:33:14.212 23:21:31 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:33:14.212 23:21:31 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:33:14.212 23:21:31 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:33:14.212 23:21:31 -- common/autobuild_common.sh@463 -- $ get_config_params 00:33:14.212 23:21:31 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:33:14.212 23:21:31 -- common/autotest_common.sh@10 -- $ set +x 00:33:14.212 23:21:31 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:33:14.212 23:21:31 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:33:14.212 23:21:31 -- pm/common@17 -- $ local monitor 00:33:14.212 23:21:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:14.212 23:21:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:14.212 23:21:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:14.212 23:21:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:14.212 23:21:31 -- pm/common@21 -- $ date +%s 00:33:14.212 23:21:31 -- pm/common@21 -- $ date +%s 00:33:14.212 23:21:31 -- pm/common@25 -- $ sleep 1 00:33:14.212 23:21:31 -- pm/common@21 -- $ date +%s 00:33:14.212 23:21:31 -- pm/common@21 -- $ date +%s 00:33:14.212 23:21:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721856091 00:33:14.212 23:21:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721856091 00:33:14.212 23:21:31 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p 
monitor.autopackage.sh.1721856091 00:33:14.212 23:21:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721856091 00:33:14.212 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721856091_collect-vmstat.pm.log 00:33:14.212 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721856091_collect-cpu-load.pm.log 00:33:14.212 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721856091_collect-cpu-temp.pm.log 00:33:14.212 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721856091_collect-bmc-pm.bmc.pm.log 00:33:15.154 23:21:32 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:33:15.154 23:21:32 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:33:15.154 23:21:32 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:15.154 23:21:32 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:33:15.154 23:21:32 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:33:15.154 23:21:32 -- spdk/autopackage.sh@19 -- $ timing_finish 00:33:15.154 23:21:32 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:15.154 23:21:32 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:33:15.154 23:21:32 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:15.154 23:21:32 -- spdk/autopackage.sh@20 -- $ exit 0 00:33:15.154 23:21:32 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:33:15.154 23:21:32 -- pm/common@29 -- $ signal_monitor_resources TERM 
00:33:15.154 23:21:32 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:33:15.154 23:21:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:15.154 23:21:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:33:15.154 23:21:32 -- pm/common@44 -- $ pid=1111164 00:33:15.154 23:21:32 -- pm/common@50 -- $ kill -TERM 1111164 00:33:15.154 23:21:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:15.154 23:21:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:33:15.154 23:21:32 -- pm/common@44 -- $ pid=1111165 00:33:15.154 23:21:32 -- pm/common@50 -- $ kill -TERM 1111165 00:33:15.154 23:21:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:15.154 23:21:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:33:15.154 23:21:32 -- pm/common@44 -- $ pid=1111168 00:33:15.154 23:21:32 -- pm/common@50 -- $ kill -TERM 1111168 00:33:15.154 23:21:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:15.154 23:21:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:33:15.154 23:21:32 -- pm/common@44 -- $ pid=1111190 00:33:15.154 23:21:32 -- pm/common@50 -- $ sudo -E kill -TERM 1111190 00:33:15.154 + [[ -n 492513 ]] 00:33:15.154 + sudo kill 492513 00:33:15.166 [Pipeline] } 00:33:15.184 [Pipeline] // stage 00:33:15.190 [Pipeline] } 00:33:15.208 [Pipeline] // timeout 00:33:15.213 [Pipeline] } 00:33:15.230 [Pipeline] // catchError 00:33:15.235 [Pipeline] } 00:33:15.252 [Pipeline] // wrap 00:33:15.258 [Pipeline] } 00:33:15.273 [Pipeline] // catchError 00:33:15.281 [Pipeline] stage 00:33:15.283 [Pipeline] { (Epilogue) 00:33:15.297 [Pipeline] catchError 00:33:15.299 [Pipeline] { 00:33:15.314 [Pipeline] echo 00:33:15.315 Cleanup processes 
00:33:15.321 [Pipeline] sh 00:33:15.613 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:15.614 1111273 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:33:15.614 1111713 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:15.629 [Pipeline] sh 00:33:15.918 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:15.918 ++ grep -v 'sudo pgrep' 00:33:15.918 ++ awk '{print $1}' 00:33:15.918 + sudo kill -9 1111273 00:33:15.930 [Pipeline] sh 00:33:16.221 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:28.499 [Pipeline] sh 00:33:28.787 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:28.787 Artifacts sizes are good 00:33:28.804 [Pipeline] archiveArtifacts 00:33:28.812 Archiving artifacts 00:33:29.004 [Pipeline] sh 00:33:29.294 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:33:29.311 [Pipeline] cleanWs 00:33:29.322 [WS-CLEANUP] Deleting project workspace... 00:33:29.322 [WS-CLEANUP] Deferred wipeout is used... 00:33:29.330 [WS-CLEANUP] done 00:33:29.332 [Pipeline] } 00:33:29.353 [Pipeline] // catchError 00:33:29.366 [Pipeline] sh 00:33:29.656 + logger -p user.info -t JENKINS-CI 00:33:29.666 [Pipeline] } 00:33:29.683 [Pipeline] // stage 00:33:29.688 [Pipeline] } 00:33:29.707 [Pipeline] // node 00:33:29.711 [Pipeline] End of Pipeline 00:33:29.745 Finished: SUCCESS